Resolving an Oracle datafile whose path contains a carriage return

Recently a friend who does hardware-level recovery asked for help: the Oracle database files had been recovered, but when the database was started on Linux, two of the files could not be found, even though dbv reported them as clean.

Analysis showed that the failure occurred simply because the files could not be located.

Further checking revealed that in the original database the names of these two files ended with a carriage return, while the recovered files did not.

I reproduced and resolved this fault in a test environment.
1. Create a datafile whose name contains a carriage return

SQL> create tablespace xifenfei datafile '/u01/app/oracle/oradata/xifenfei/xff01.dbf
  2  ' size 128m;

Tablespace created.

SQL> alter tablespace xifenfei add datafile '/u01/app/oracle/oradata/xifenfei/xff02.dbf' size 128M;

Tablespace altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/xifenfei/system01.dbf
/u01/app/oracle/oradata/xifenfei/sysaux01.dbf
/u01/app/oracle/oradata/xifenfei/undotbs01.dbf
/u01/app/oracle/oradata/xifenfei/users01.dbf
/u01/app/oracle/oradata/xifenfei/xff01.dbf
/u01/app/oracle/oradata/xifenfei/xff02.dbf

6 rows selected.

2. Check the files at the OS level (in my SSH client the file whose name contains a carriage return is displayed differently from the one without it; I use SecureCRT, and whether other tools show the difference I cannot say)

[oracle@xifenfei ~]$ cd /u01/app/oracle/oradata/xifenfei/
[oracle@xifenfei xifenfei]$ ls -l xff*
-rw-r----- 1 oracle oinstall 134225920 Dec 14 08:05 xff01.dbf?
-rw-r----- 1 oracle oinstall 134225920 Dec 14 08:05 xff02.dbf
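
If the terminal does not make the difference obvious, non-printing characters in file names can be exposed explicitly. A minimal sketch (the path is the test directory used above; adjust it to your environment):

# print C-style escapes for non-graphic characters: a trailing CR shows up as \r
ls -lb /u01/app/oracle/oradata/xifenfei/xff*
# alternative: cat -A prints a carriage return as ^M and marks the end of each name with $
ls /u01/app/oracle/oradata/xifenfei/xff* | cat -A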

3. Rename the datafile at the OS level

[oracle@xifenfei xifenfei]$ mv xff01.dbf* xff01.dbf
[oracle@xifenfei xifenfei]$ ls -l xff*
-rw-r----- 1 oracle oinstall 134225920 Dec 14 08:05 xff01.dbf
-rw-r----- 1 oracle oinstall 134225920 Dec 14 08:05 xff02.dbf

4. Restart the database and check the files; the file can no longer be found (of course not, it was renamed with mv at the OS level)

SQL> shutdown abort;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  551165952 bytes
Fixed Size                  2255112 bytes
Variable Size             369100536 bytes
Database Buffers          171966464 bytes
Redo Buffers                7843840 bytes
Database mounted.
SQL> select file#, CHECKPOINT_CHANGE# from v$datafile_header;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1          306775013
         2          306775013
         3          306775013
         4          306775013
         5                  0
         6          306779423

6 rows selected.

RMAN> report schema;

Report of database schema for database with db_unique_name XIFENFEI

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    770      SYSTEM               ***     /u01/app/oracle/oradata/xifenfei/system01.dbf
2    1950     SYSAUX               ***     /u01/app/oracle/oradata/xifenfei/sysaux01.dbf
3    70       UNDOTBS1             ***     /u01/app/oracle/oradata/xifenfei/undotbs01.dbf
4    12       USERS                ***     /u01/app/oracle/oradata/xifenfei/users01.dbf
5    0        XIFENFEI             ***     /u01/app/oracle/oradata/xifenfei/xff01.dbf
6    128      XIFENFEI             ***     /u01/app/oracle/oradata/xifenfei/xff02.dbf

5. Fix the mismatch between the file name recorded in the controlfile and the actual datafile name

RMAN> catalog datafilecopy '/u01/app/oracle/oradata/xifenfei/xff01.dbf';

using target database control file instead of recovery catalog
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/xifenfei/xff01.dbf RECID=1 STAMP=1187684217

RMAN> switch datafile 5 to copy;

datafile 5 switched to datafile copy "/u01/app/oracle/oradata/xifenfei/xff01.dbf"

RMAN> report schema;

Report of database schema for database with db_unique_name XIFENFEI

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    770      SYSTEM               ***     /u01/app/oracle/oradata/xifenfei/system01.dbf
2    1950     SYSAUX               ***     /u01/app/oracle/oradata/xifenfei/sysaux01.dbf
3    70       UNDOTBS1             ***     /u01/app/oracle/oradata/xifenfei/undotbs01.dbf
4    12       USERS                ***     /u01/app/oracle/oradata/xifenfei/users01.dbf
5    128      XIFENFEI             ***     /u01/app/oracle/oradata/xifenfei/xff01.dbf
6    128      XIFENFEI             ***     /u01/app/oracle/oradata/xifenfei/xff02.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    123      TEMP                 32767       /u01/app/oracle/oradata/xifenfei/temp01.dbf


RMAN> alter database open;

database opened

ORA-12514: TNS: the listener could not resolve the SERVICE_NAME given in the connect descriptor

An application connecting to a 10g database was failing with ORA-12514: TNS: the listener could not resolve the SERVICE_NAME given in the connect descriptor.

Analysis of the alert log confirmed that the database was reporting ORA-600 [4194] errors during startup:

Mon Sep 23 16:12:42 2024
SMON: enabling cache recovery
Mon Sep 23 16:12:43 2024
Successfully onlined Undo Tablespace 1.
Mon Sep 23 16:12:43 2024
SMON: enabling tx recovery
Mon Sep 23 16:12:43 2024
Database Characterset is ZHS16GBK
Mon Sep 23 16:12:43 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\udump\xifenfei_ora_7832.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

DEBUG: Replaying xcb 0xae312888, pmd 0x9058f4d4 for failed op 8
Doing block recovery for file 2 block 5547
No block recovery was needed
Mon Sep 23 16:13:31 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\udump\xifenfei_ora_7832.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Mon Sep 23 16:13:32 2024
DEBUG: Replaying xcb 0xae312888, pmd 0x9058f4d4 for failed op 8
Mon Sep 23 16:13:32 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\udump\xifenfei_ora_7832.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Doing block recovery for file 2 block 5547
No block recovery was needed
Mon Sep 23 16:13:33 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\udump\xifenfei_ora_7832.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Mon Sep 23 16:14:18 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_smon_5880.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Mon Sep 23 16:14:19 2024
DEBUG: Replaying xcb 0xae312888, pmd 0x9058f4d4 for failed op 8
Mon Sep 23 16:14:19 2024
Non-fatal internal error happenned while SMON was doing shrinking of rollback segments.
SMON encountered 1 out of maximum 100 non-fatal internal errors.
Mon Sep 23 16:14:19 2024
Doing block recovery for file 2 block 5547
No block recovery was needed
Mon Sep 23 16:15:06 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_pmon_6952.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Mon Sep 23 16:15:06 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_pmon_6952.trc:
ORA-00600: internal error code, arguments: [4194], [66], [50], [], [], [], [], []

Mon Sep 23 16:15:06 2024
PMON: terminating instance due to error 472
Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_psp0_2104.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_lgwr_3200.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_dbw1_448.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_dbw0_7436.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_mman_1704.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_dbw2_5072.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_ckpt_6628.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_reco_7924.trc:
ORA-00472: PMON  process terminated with error

Mon Sep 23 16:15:07 2024
Errors in file d:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_smon_5880.trc:
ORA-00472: PMON  process terminated with error

Instance terminated by PMON, pid = 6952

This kind of case is fairly straightforward: it is generally an undo problem. Undo management was switched to manual and the undo tablespace was rebuilt, which completed this recovery.
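
As a rough sketch of that approach (the undo tablespace and datafile names below are illustrative assumptions, not taken from the actual system):

-- fall back to manual undo management so the damaged undo segments are not used
alter system set undo_management=manual scope=spfile;
shutdown immediate
startup
-- create a fresh undo tablespace, switch to it and drop the old one
create undo tablespace undotbs2 datafile 'd:\oracle\oradata\xifenfei\undotbs02.dbf' size 512m autoextend on;
alter system set undo_tablespace=undotbs2 scope=spfile;
drop tablespace undotbs1 including contents and datafiles;
-- switch back to automatic undo management and restart
alter system set undo_management=auto scope=spfile;
shutdown immediate
startup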

Database startup fails with ORA-27102 OSD-00026 O/S-Error: (OS 1455)

An 11.2.0.4 database running on Windows reported ORA-27102 OSD-00026 O/S-Error: (OS 1455) at startup:

SQL> STARTUP
ORA-27102: out of memory
OSD-00026: additional error information
O/S-Error: (OS 1455) The paging file is too small for this operation to complete.

The database alert log contains:

Fri Sep 13 17:17:39 2024
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 24
Number of processor cores in the system is 12
Number of processor sockets in the system is 1
Picked latch-free SCN scheme 3
Error: Failed to allocate SGA granule addr 0000000760000000 size 268435456 
 mode 131073 locality 0
Errors in file D:\app\Administrator\diag\rdbms\xff\xff\trace\xff_ora_77728.trc:
ORA-27102: out of memory
OSD-00026: additional error information
O/S-Error: (OS 1455) The paging file is too small for this operation to complete.
Error: Failed to allocate SGA granule addr 0000000750000000 size 268435456 
 mode 131073 locality 0
Errors in file D:\app\Administrator\diag\rdbms\xff\xff\trace\xff_ora_77728.trc:

From the error messages, the first impression is that this is memory related: the system may not have enough memory to satisfy the SGA allocation, so the system's free memory was checked.

The system clearly had plenty of free memory, so the cause of the problem probably lay with the Windows operating system itself. Since the business had to be restored quickly, the server was simply rebooted, after which the database started successfully.

Hardware failure causing ORA-01242, ORA-01122 and related errors

A customer running a multi-node RAC reported in the morning that the instances on two nodes were abnormal and asked for a root-cause analysis. The database alert log on one of the nodes showed that access to datafile 1399 had failed with ORA-01242, ORA-01122 and related errors, causing the instance to crash:

Mon Aug 19 20:48:02 2024
Read of datafile '+DATA/xifenfei_01-157.dbf' (fno 1399) header failed with ORA-01207
Rereading datafile 1399 header failed with ORA-01207
Errors in file /u01/app/oracle/diag/rdbms/xff/xff6/trace/xff6_ckpt_75582.trc:
ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
ORA-01122: database file 1399 failed verification check
ORA-01110: data file 1399: '+DATA/xifenfei_01-157.dbf'
ORA-01207: file is more recent than control file - old control file
Errors in file /u01/app/oracle/diag/rdbms/xff/xff6/trace/xff6_ckpt_75582.trc:
ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
ORA-01122: database file 1399 failed verification check
ORA-01110: data file 1399: '+DATA/xifenfei_01-157.dbf'
ORA-01207: file is more recent than control file - old control file
CKPT (ospid: 75582): terminating the instance due to error 1242
Mon Aug 19 20:48:02 2024
System state dump requested by (instance=6, osid=75582 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/xff/xff6/trace/xff6_diag_75520.trc
Termination issued to instance processes. Waiting for the processes to exit
Mon Aug 19 20:48:13 2024
ORA-1092 : opitsk aborting process

Further analysis of the log showed that the clusterware tried to bring the instance back up, but it hit ORA-01186 and ORA-01122 and could not start successfully:

ALTER DATABASE OPEN /* db agent *//* {0:6:39} */
Mon Aug 19 20:49:34 2024
SUCCESS: diskgroup DATA was mounted
Mon Aug 19 20:49:34 2024
NOTE: dependency between database xff and diskgroup resource ora.DATA.dg is established
Mon Aug 19 20:50:41 2024
Picked broadcast on commit scheme to generate SCNs
Mon Aug 19 20:50:42 2024
Read of datafile '+DATA/xifenfei_01-157.dbf' (fno 1399) header failed with ORA-01207
Rereading datafile 1399 header failed with ORA-01207
Errors in file /u01/app/oracle/diag/rdbms/xff/xff6/trace/xff6_dbw0_29208.trc:
ORA-01186: file 1399 failed verification tests
ORA-01122: database file 1399 failed verification check
ORA-01110: data file 1399: '+DATA/xifenfei_01-157.dbf'
ORA-01207: file is more recent than control file - old control file
File 1399 not verified due to error ORA-01122

These errors are caused by failures while accessing a database file; in my experience such problems usually point to a fault in the underlying storage. The system messages log indeed showed hardware disk errors:

Aug 19 20:41:58 xff6 fcoemon: FC_HOST_EVENT 6894 at 1724071318 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:41:58 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:41:58 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:41:58 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:41:58 xff6 kernel: <<vendor>> ASC=0xe0 ASCQ=0x1ASC=0xe0 ASCQ=0x1
Aug 19 20:42:03 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:42:03 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:03 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:42:03 xff6 kernel: <<vendor>> ASC=0xe0 ASCQ=0x1ASC=0xe0 ASCQ=0x1
Aug 19 20:42:03 xff6 fcoemon: FC_HOST_EVENT 6895 at 1724071323 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:07 xff6 fcoemon: FC_HOST_EVENT 6896 at 1724071327 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:07 xff6 kernel: sd 1:0:0:44: [sdat]  
Aug 19 20:42:07 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:07 xff6 kernel: sd 1:0:0:44: [sdat]  
Aug 19 20:42:07 xff6 kernel: <<vendor>> ASC=0xe0 ASCQ=0x1ASC=0xe0 ASCQ=0x1
Aug 19 20:42:12 xff6 fcoemon: FC_HOST_EVENT 6897 at 1724071332 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:12 xff6 kernel: sd 1:0:0:44: [sdat]  
Aug 19 20:42:12 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:12 xff6 kernel: sd 1:0:0:44: [sdat]  
Aug 19 20:42:12 xff6 kernel: <<vendor>> ASC=0xe0 ASCQ=0x1ASC=0xe0 ASCQ=0x1
Aug 19 20:42:25 xff6 fcoemon: FC_HOST_EVENT 6898 at 1724071345 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:25 xff6 kernel: sd 1:0:0:42: [sdar]  
Aug 19 20:42:25 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:25 xff6 kernel: sd 1:0:0:42: [sdar]  
Aug 19 20:42:25 xff6 kernel: <<vendor>> ASC=0xe0 ASCQ=0x1ASC=0xe0 ASCQ=0x1
Aug 19 20:42:41 xff6 fcoemon: FC_HOST_EVENT 6899 at 1724071361 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:42: [sdar]  
Aug 19 20:42:41 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:42: [sdar]  
Aug 19 20:42:41 xff6 kernel: <<vendor>> ASC=0xd0 ASCQ=0x6ASC=0xd0 ASCQ=0x6
Aug 19 20:42:41 xff6 fcoemon: FC_HOST_EVENT 6900 at 1724071361 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:41 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:41 xff6 kernel: <<vendor>> ASC=0x95 ASCQ=0x1ASC=0x95 ASCQ=0x1
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:41 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:41 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:41 xff6 kernel: <<vendor>> ASC=0xd0 ASCQ=0x6ASC=0xd0 ASCQ=0x6
Aug 19 20:42:41 xff6 fcoemon: FC_HOST_EVENT 6901 at 1724071361 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:53 xff6 fcoemon: FC_HOST_EVENT 6902 at 1724071373 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:42:53 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:53 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:42:53 xff6 kernel: sd 1:0:0:41: [sdaq]  
Aug 19 20:42:53 xff6 kernel: <<vendor>> ASC=0x95 ASCQ=0x1ASC=0x95 ASCQ=0x1
Aug 19 20:43:03 xff6 kernel: sd 1:0:0:40: [sdap]  
Aug 19 20:43:03 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:43:03 xff6 kernel: sd 1:0:0:40: [sdap]  
Aug 19 20:43:03 xff6 kernel: <<vendor>> ASC=0x95 ASCQ=0x1ASC=0x95 ASCQ=0x1
Aug 19 20:43:03 xff6 fcoemon: FC_HOST_EVENT 6903 at 1724071383 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:43:03 xff6 fcoemon: FC_HOST_EVENT 6904 at 1724071383 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:43:03 xff6 fcoemon: FC_HOST_EVENT 6905 at 1724071383 secs on host1:code 65535=vendor_unique datalen 32 data=512
Aug 19 20:43:03 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:43:03 xff6 kernel: Sense Key : Recovered Error [current] 
Aug 19 20:43:03 xff6 kernel: sd 1:0:0:43: [sdas]  
Aug 19 20:43:03 xff6 kernel: <<vendor>> ASC=0x95 ASCQ=0x1ASC=0x95 ASCQ=0x1
Aug 19 20:49:26 xff6 kernel: scsi_verify_blk_ioctl: 683 callbacks suppressed

The customer's further analysis showed that a disk in the storage array had failed the previous day and a hot spare had taken over; for reasons unknown, file access errors then appeared, possibly related to the rebuild that was running at the time. Since this is a RAC environment and the remaining nodes were still running normally, the database on the failed node was simply started again and came up successfully.

Afterwards, nodes writing data reported ORA-01187: cannot read from file because it failed verification tests.

After running ALTER SYSTEM CHECK DATAFILES on every node, all nodes operated normally again.
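
For reference, the statement is issued from SQL*Plus on each instance; a minimal sketch, including a check that no file is still flagged afterwards:

-- run on every RAC instance so each node re-reads and re-verifies the datafile headers
alter system check datafiles;

-- confirm that no datafile is still marked with an error
select file#, error from v$recover_file;
select file#, status, error from v$datafile_header where error is not null;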

Oracle support for GB18030-2022

GB18030 is a Chinese government standard that defines the language and character-set support required for software in China. It is a Unicode transformation format aligned with ISO 10646/Unicode and provides a character repertoire covering all of Unicode. It was first published in 2000 (GB18030-2000) and has been updated twice since, in 2005 (GB18030-2005) and 2022 (GB18030-2022). In terms of character coverage, GB18030-2022 is equivalent to Unicode Version 11.0, and the standard came into force on 1 August 2023. Oracle Database supports storing GB18030 characters through the AL32UTF8 Unicode database character set. It also provides the client-only character set ZHS32GB18030, which lets applications handle GB18030-encoded text on the client side, with character-set conversion performed as needed. As of March 2023, the ZHS32GB18030 implementation in existing Oracle Database releases is based on GB18030-2005.
Oracle Database 23ai implements support for the latest GB18030-2022 standard; specifically, it supports implementation level 3, the broadest support level of GB18030. In addition, GB18030-2022 support is provided for existing customers on Oracle Database 19c through a one-off patch on top of the various 19c RUs. To support GB18030-2022 on 19c, download patch #34994751; it can be applied on 19c RU 19.3 or later and must be applied to both the database server and the client. The patch only enables GB18030-2022 support in the Oracle database itself; for an application to fully support GB18030-2022, every component in the application stack must be able to handle GB18030-2022 characters.
ZHS32GB18030 is a client-only character set, so using ZHS32GB18030 as the NLS_CHARACTERSET is not supported. If your database uses ZHS32GB18030 as its NLS_CHARACTERSET, you are strongly advised to migrate to AL32UTF8 as soon as possible.
The GB18030 encoding defines a complete mapping to Unicode. This means that on a GB18030 client with NLS_LANG set to ZHS32GB18030 and a database whose NLS_CHARACTERSET is AL32UTF8 (or UTF8), data is converted between the two character sets in both directions and fully preserved. When GB18030 data is inserted from such a client, Oracle converts the non-Unicode GB18030 codes to AL32UTF8 codes and stores them in the database as AL32UTF8. When GB18030 data is selected from such a client, Oracle converts it from AL32UTF8 codes back to non-Unicode GB18030 codes before returning it, so as long as the client works with non-Unicode GB18030 codes, the AL32UTF8 storage on the database side is completely transparent to the client.
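
Before planning such a migration, it is worth confirming which character sets the database currently uses; a minimal check:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
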
You can verify whether your database supports GB18030-2022 with the following SQL.
The following result indicates that GB18030-2022 is supported:

[oracle@xifenfei ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Thu Aug 1 21:02:36 2024
Version 23.5.0.24.07

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.5.0.24.07

SQL> SELECT DUMP(CONVERT(UNISTR('\FE10\FFFD\E78D'), 'ZHS32GB18030', 'AL16UTF16'), 16) FROM DUAL;

DUMP(CONVERT(UNISTR('\FE10\FFFD\E78D'),'ZHS32GB18030','AL16UTF16'),16)
--------------------------------------------------------------------------------
Typ=1 Len=10: a6,d9,84,31,a4,37,84,31,82,36

The following result indicates that GB18030-2022 is not supported:

[oracle@iZbp11c0qyuuo1gr7j98upZ ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Aug 1 20:46:12 2024

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> SELECT DUMP(CONVERT(UNISTR('\FE10\FFFD\E78D'), 'ZHS32GB18030', 'AL16UTF16'), 16) FROM DUAL;

DUMP(CONVERT(UNISTR('\FE10\FFFD\E78D
------------------------------------
Typ=1 Len=8: 84,31,82,36,a3,bf,a6,d9

Reference: Oracle Database GB18030-2022 support policy (Doc ID 2937409.1)

Manually changing ownership of multipath devices leads to ORA-15032 / ORA-15131 when mounting an ASM diskgroup

A customer's storage vendor rebuilt the RAID at the hardware level and presented the LUNs back to the ASM host. When mounting the data_dg diskgroup, it failed with ORA-15032 and ORA-15131 and the diskgroup could not be mounted. This error pattern is not very common: usually you either get an error that a specific block cannot be accessed, or a complaint about a missing ASM disk.

After logging in remotely to analyze, the ASM alert log showed the following:

Wed Jul 31 04:55:17 2024
NOTE: attached to recovery domain 1
NOTE: cache recovered group 1 to fcn 0.1814063801
NOTE: redo buffer size is 256 blocks (1053184 bytes)
Wed Jul 31 04:55:17 2024
NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (DATA_DG)
Errors in file /oracle/u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_lgwr_8681.trc:
ORA-15025: could not open disk "/dev/mapper/xffdb_data01_new"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Errors in file /oracle/u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_lgwr_8681.trc:
ORA-15025: could not open disk "/dev/mapper/xffdb_data01_new"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
WARNING: cache failed reading from group=1(DATA_DG) fn=1 blk=3 count=1 from disk= 0 
  (DATA_DG_0000) kfkist=0x20 status=0x02 osderr=0x0 file=kfc.c line=11596
Errors in file /oracle/u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_lgwr_8681.trc:
ORA-15025: could not open disk "/dev/mapper/xffdb_data01_new"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
ORA-15080: synchronous I/O operation to a disk failed
ERROR: cache failed to read group=1(DATA_DG) fn=1 blk=3 from disk(s): 0(DATA_DG_0000)
ORA-15080: synchronous I/O operation to a disk failed
NOTE: cache initiating offline of disk 0 group DATA_DG
NOTE: process _lgwr_+asm2 (8681) initiating offline of disk 0.3915927124 (DATA_DG_0000) with mask 0x7e in group 1
NOTE: initiating PST update: grp = 1, dsk = 0/0xe9684e54, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 42 for pid 15, osid 8681
ERROR: Disk 0 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 1)
WARNING: Offline for disk DATA_DG_0000 in mode 0x7f failed.
Wed Jul 31 04:55:17 2024
NOTE: halting all I/Os to diskgroup 1 (DATA_DG)
NOTE: LGWR caught ORA-15131 while mounting diskgroup 1
ORA-15080: synchronous I/O operation to a disk failed
NOTE: cache initiating offline of disk 0 group DATA_DG
NOTE: process _lgwr_+asm2 (8681) initiating offline of disk 0.3915927124 (DATA_DG_0000) with mask 0x7e in group 1
NOTE: initiating PST update: grp = 1, dsk = 0/0xe9684e54, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 42 for pid 15, osid 8681
ERROR: Disk 0 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 1)
WARNING: Offline for disk DATA_DG_0000 in mode 0x7f failed.
Wed Jul 31 04:55:17 2024
NOTE: halting all I/Os to diskgroup 1 (DATA_DG)
NOTE: LGWR caught ORA-15131 while mounting diskgroup 1
ERROR: ORA-15131 signalled during mount of diskgroup DATA_DG
NOTE: cache dismounting (clean) group 1/0xA868BD55 (DATA_DG)
NOTE: messaging CKPT to quiesce pins Unix process pid: 16915, image: oracle@xffdb2 (TNS V1-V3)
NOTE: lgwr not being msg'd to dismount
Wed Jul 31 04:55:18 2024
List of instances:
 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 9)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE
 2 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
freeing rdom 1
WARNING: dirty detached from domain 1
WARNING: thread recovery enqueue was not held for domain 1 when doing a dirty detach
NOTE: cache dismounted group 1/0xA868BD55 (DATA_DG)
NOTE: cache ending mount (fail) of group DATA_DG number=1 incarn=0xa868bd55
NOTE: cache deleting context for group DATA_DG 1/0xa868bd55
GMON dismounting group 1 at 43 for pid 29, osid 16915
NOTE: Disk DATA_DG_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_DG_0001 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_DG_0002 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_DG_0003 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_DG_0004 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_DG_0005 in mode 0x7f marked for de-assignment
ERROR: diskgroup DATA_DG was not mounted
ORA-15032: not all alterations performed
ORA-15131: block  of file  in diskgroup  could not be read
ERROR: alter diskgroup data_dg mount

This essentially confirms that incorrect permissions on /dev/mapper/xffdb_data01_new caused the read of disk=0 fn=1 blk=3 to fail (a permission error appearing suddenly on this block, rather than on the very first read of the disk header, is a bit counter-intuitive). Further analysis confirmed that the permissions on xffdb_data01_new were indeed wrong.

xffdb2:/oracle/u01/app/grid/diag/asm/+asm/+ASM2/trace$ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 26 12:24 control
lrwxrwxrwx 1 root root      8 Jul 31 04:21 mpathe -> ../dm-17
lrwxrwxrwx 1 root root      7 Jul 31 04:28 mpathf -> ../dm-7
lrwxrwxrwx 1 root root      8 Jul 31 04:55 xffdb_data01_new -> ../dm-14
lrwxrwxrwx 1 root root      8 Jul 31 04:55 xffdb_data02_new -> ../dm-13
lrwxrwxrwx 1 root root      7 Jul 31 04:55 xffdb_data03 -> ../dm-2
lrwxrwxrwx 1 root root      7 Jul 31 04:55 xffdb_data04 -> ../dm-5
lrwxrwxrwx 1 root root      8 Jul 31 04:55 xffdb_data05_new -> ../dm-12
lrwxrwxrwx 1 root root      7 Jul 31 04:55 xffdb_data06 -> ../dm-6
lrwxrwxrwx 1 root root      8 Jul 31 04:28 xffdb_data07 -> ../dm-11
lrwxrwxrwx 1 root root      7 Jul 31 04:28 xffdb_data08 -> ../dm-9
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_log1 -> ../dm-4
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_log2 -> ../dm-3
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_vote2 -> ../dm-8
lrwxrwxrwx 1 root root      8 Jul 31 04:59 xffdb_vote3 -> ../dm-10
lrwxrwxrwx 1 root root      8 Jul 26 12:24 vgdata-lv_data -> ../dm-15
lrwxrwxrwx 1 root root      7 Jul 26 12:24 vg_xffdb2-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Jul 26 12:24 vg_xffdb2-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Jul 26 12:24 vg_xffdb2-LogVol02 -> ../dm-16
xffdb2:/oracle/u01/app/grid/diag/asm/+asm/+ASM2/trace$ls -l /dev/dm*
brw-rw---- 1 root disk     253,  0 Jul 26 12:24 /dev/dm-0
brw-rw---- 1 root disk     253,  1 Jul 26 12:24 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 10 Jul 31 05:13 /dev/dm-10
brw-rw---- 1 root disk     253, 11 Jul 31 04:28 /dev/dm-11
brw-rw---- 1 root disk     253, 12 Jul 31 04:55 /dev/dm-12
brw-rw---- 1 grid asmadmin 253, 13 Jul 31 04:55 /dev/dm-13
brw-rw---- 1 grid asmadmin 253, 14 Jul 31 04:55 /dev/dm-14
brw-rw---- 1 root disk     253, 15 Jul 26 12:24 /dev/dm-15
brw-rw---- 1 root disk     253, 16 Jul 26 12:24 /dev/dm-16
brw-rw---- 1 root disk     253, 17 Jul 31 04:21 /dev/dm-17
brw-rw---- 1 grid asmadmin 253,  2 Jul 31 04:55 /dev/dm-2
brw-rw---- 1 grid asmadmin 253,  3 Jul 31 04:59 /dev/dm-3
brw-rw---- 1 grid asmadmin 253,  4 Jul 31 05:13 /dev/dm-4
brw-rw---- 1 grid asmadmin 253,  5 Jul 31 04:55 /dev/dm-5
brw-rw---- 1 grid asmadmin 253,  6 Jul 31 04:55 /dev/dm-6
brw-rw---- 1 root disk     253,  7 Jul 31 04:28 /dev/dm-7
brw-rw---- 1 grid asmadmin 253,  8 Jul 31 05:13 /dev/dm-8
brw-rw---- 1 root disk     253,  9 Jul 31 04:28 /dev/dm-9

It was further confirmed that the three xffdb_*_new disks had been imaged back after the hardware recovery, and that the on-site engineer had manually changed the permissions of /dev/dm-[12-14] before trying to mount the diskgroup again, which is when the error occurred. Querying v$asm_disk once more showed that none of the xffdb_*_new disks appeared in the list:

GROUP_NUMBER DISK_NUMBER HEADER_STATUS         STATE          PATH
------------ ----------- --------------------- -------------- --------------------------
           0           2 MEMBER                NORMAL         /dev/mapper/xffdb_data03
           0           3 MEMBER                NORMAL         /dev/mapper/xffdb_data06
           0           4 MEMBER                NORMAL         /dev/mapper/xffdb_data04
           3           1 MEMBER                NORMAL         /dev/mapper/xffdb_vote2
           2           0 MEMBER                NORMAL         /dev/mapper/xffdb_log1
           3           2 MEMBER                NORMAL         /dev/mapper/xffdb_vote3
           2           1 MEMBER                NORMAL         /dev/mapper/xffdb_log2

7 rows selected.

Checking the disk permissions again:

xffdb2:/dev/mapper$ls -ltr
total 0
crw-rw---- 1 root root 10, 58 Jul 26 12:24 control
lrwxrwxrwx 1 root root      7 Jul 26 12:24 vg_xffdb2-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Jul 26 12:24 vgdata-lv_data -> ../dm-15
lrwxrwxrwx 1 root root      7 Jul 26 12:24 vg_xffdb2-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      8 Jul 26 12:24 vg_xffdb2-LogVol02 -> ../dm-16
lrwxrwxrwx 1 root root      8 Jul 31 04:21 mpathe -> ../dm-17
lrwxrwxrwx 1 root root      7 Jul 31 04:28 xffdb_data08 -> ../dm-9
lrwxrwxrwx 1 root root      8 Jul 31 04:28 xffdb_data07 -> ../dm-11
lrwxrwxrwx 1 root root      7 Jul 31 04:28 mpathf -> ../dm-7
lrwxrwxrwx 1 root root      8 Jul 31 04:55 xffdb_data05_new -> ../dm-12
lrwxrwxrwx 1 root root      8 Jul 31 04:59 xffdb_vote3 -> ../dm-10
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_vote2 -> ../dm-8
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_log2 -> ../dm-3
lrwxrwxrwx 1 root root      7 Jul 31 04:59 xffdb_log1 -> ../dm-4
lrwxrwxrwx 1 root root      8 Jul 31 05:15 xffdb_data01_new -> ../dm-14
lrwxrwxrwx 1 root root      8 Jul 31 05:15 xffdb_data02_new -> ../dm-13
lrwxrwxrwx 1 root root      7 Jul 31 05:15 xffdb_data06 -> ../dm-6
lrwxrwxrwx 1 root root      7 Jul 31 05:15 xffdb_data04 -> ../dm-5
lrwxrwxrwx 1 root root      7 Jul 31 05:15 xffdb_data03 -> ../dm-2
xffdb2:/dev/mapper$ls -l /dev/dm*
brw-rw---- 1 root disk     253,  0 Jul 26 12:24 /dev/dm-0
brw-rw---- 1 root disk     253,  1 Jul 26 12:24 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 10 Jul 31 05:22 /dev/dm-10
brw-rw---- 1 root disk     253, 11 Jul 31 04:28 /dev/dm-11
brw-rw---- 1 root disk     253, 12 Jul 31 04:55 /dev/dm-12
brw-rw---- 1 root disk     253, 13 Jul 31 05:15 /dev/dm-13
brw-rw---- 1 root disk     253, 14 Jul 31 05:15 /dev/dm-14
brw-rw---- 1 root disk     253, 15 Jul 26 12:24 /dev/dm-15
brw-rw---- 1 root disk     253, 16 Jul 26 12:24 /dev/dm-16
brw-rw---- 1 root disk     253, 17 Jul 31 04:21 /dev/dm-17
brw-rw---- 1 grid asmadmin 253,  2 Jul 31 05:15 /dev/dm-2
brw-rw---- 1 grid asmadmin 253,  3 Jul 31 04:59 /dev/dm-3
brw-rw---- 1 grid asmadmin 253,  4 Jul 31 05:22 /dev/dm-4
brw-rw---- 1 grid asmadmin 253,  5 Jul 31 05:15 /dev/dm-5
brw-rw---- 1 grid asmadmin 253,  6 Jul 31 05:15 /dev/dm-6
brw-rw---- 1 root disk     253,  7 Jul 31 04:28 /dev/dm-7
brw-rw---- 1 grid asmadmin 253,  8 Jul 31 05:22 /dev/dm-8
brw-rw---- 1 root disk     253,  9 Jul 31 04:28 /dev/dm-9

On further access, the ownership of these three disks had all reverted to root:disk, so the grid user could no longer access them. At this point it was clear that the permissions of the three recovered multipath devices were being reset whenever the devices were accessed. This usually happens because the devices are not bound by udev rules. After binding the ownership and permissions of these three disks through udev (a sketch of such a rule follows), the permissions stopped changing and v$asm_disk again showed the expected information.
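
A minimal sketch of the kind of udev rule used here (the rules file name is arbitrary; the DM_NAME values are the multipath aliases of this system, so in other environments match on your own aliases or on ENV{DM_UUID}):

# /etc/udev/rules.d/99-oracle-asmdevices.rules (example file name)
KERNEL=="dm-*", ENV{DM_NAME}=="xffdb_data01_new", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="xffdb_data02_new", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="xffdb_data05_new", OWNER="grid", GROUP="asmadmin", MODE="0660"

# reload the rules and re-trigger the device events so the new ownership takes effect
udevadm control --reload-rules
udevadm trigger --type=devices --action=change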

[root@xffdb2 rules.d]# ls -l /dev/dm*
brw-rw---- 1 root disk     253,  0 Jul 31 05:26 /dev/dm-0
brw-rw---- 1 root disk     253,  1 Jul 31 05:26 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 10 Jul 31 05:26 /dev/dm-10
brw-rw---- 1 root disk     253, 11 Jul 31 05:26 /dev/dm-11
brw-rw---- 1 grid asmadmin 253, 12 Jul 31 05:26 /dev/dm-12
brw-rw---- 1 grid asmadmin 253, 13 Jul 31 05:26 /dev/dm-13
brw-rw---- 1 grid asmadmin 253, 14 Jul 31 05:26 /dev/dm-14
brw-rw---- 1 root disk     253, 15 Jul 31 05:26 /dev/dm-15
brw-rw---- 1 root disk     253, 16 Jul 31 05:26 /dev/dm-16
brw-rw---- 1 root disk     253, 17 Jul 31 05:26 /dev/dm-17
brw-rw---- 1 grid asmadmin 253,  2 Jul 31 05:26 /dev/dm-2
brw-rw---- 1 grid asmadmin 253,  3 Jul 31 05:26 /dev/dm-3
brw-rw---- 1 grid asmadmin 253,  4 Jul 31 05:26 /dev/dm-4
brw-rw---- 1 grid asmadmin 253,  5 Jul 31 05:26 /dev/dm-5
brw-rw---- 1 grid asmadmin 253,  6 Jul 31 05:26 /dev/dm-6
brw-rw---- 1 root disk     253,  7 Jul 31 05:26 /dev/dm-7
brw-rw---- 1 grid asmadmin 253,  8 Jul 31 05:26 /dev/dm-8
brw-rw---- 1 root disk     253,  9 Jul 31 05:26 /dev/dm-9
[root@xffdb2 rules.d]# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 31 05:26 control
lrwxrwxrwx 1 root root      8 Jul 31 05:26 mpathe -> ../dm-17
lrwxrwxrwx 1 root root      7 Jul 31 05:26 mpathf -> ../dm-7
lrwxrwxrwx 1 root root      8 Jul 31 05:26 xffdb_data01_new -> ../dm-14
lrwxrwxrwx 1 root root      8 Jul 31 05:26 xffdb_data02_new -> ../dm-13
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_data03 -> ../dm-2
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_data04 -> ../dm-5
lrwxrwxrwx 1 root root      8 Jul 31 05:26 xffdb_data05_new -> ../dm-12
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_data06 -> ../dm-6
lrwxrwxrwx 1 root root      8 Jul 31 05:26 xffdb_data07 -> ../dm-11
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_data08 -> ../dm-9
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_log1 -> ../dm-4
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_log2 -> ../dm-3
lrwxrwxrwx 1 root root      7 Jul 31 05:26 xffdb_vote2 -> ../dm-8
lrwxrwxrwx 1 root root      8 Jul 31 05:26 xffdb_vote3 -> ../dm-10
lrwxrwxrwx 1 root root      8 Jul 31 05:26 vgdata-lv_data -> ../dm-15
lrwxrwxrwx 1 root root      7 Jul 31 05:26 vg_xffdb2-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Jul 31 05:26 vg_xffdb2-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Jul 31 05:26 vg_xffdb2-LogVol02 -> ../dm-16
[root@xffdb2 rules.d]# 
SQL> /

GROUP_NUMBER DISK_NUMBER HEADER_STATUS                        STATE                    PATH
------------ ----------- ------------------------------------ ------------------------ -----------------------------
           0           0 MEMBER                               NORMAL                   /dev/mapper/xffdb_data01_new
           0           1 MEMBER                               NORMAL                   /dev/mapper/xffdb_data05_new
           0           2 MEMBER                               NORMAL                   /dev/mapper/xffdb_data03
           0           3 MEMBER                               NORMAL                   /dev/mapper/xffdb_data06
           0           4 MEMBER                               NORMAL                   /dev/mapper/xffdb_data04
           0           5 MEMBER                               NORMAL                   /dev/mapper/xffdb_data02_new
           3           1 MEMBER                               NORMAL                   /dev/mapper/xffdb_vote2
           2           0 MEMBER                               NORMAL                   /dev/mapper/xffdb_log1
           3           2 MEMBER                               NORMAL                   /dev/mapper/xffdb_vote3
           2           1 MEMBER                               NORMAL                   /dev/mapper/xffdb_log2

10 rows selected.

The diskgroup then mounted successfully:

SQL>  alter diskgroup data_dg mount 
NOTE: cache registered group DATA_DG number=1 incarn=0x4178bd5e
NOTE: cache began mount (first) of group DATA_DG number=1 incarn=0x4178bd5e
NOTE: Assigning number (1,0) to disk (/dev/mapper/xffdb_data01_new)
NOTE: Assigning number (1,4) to disk (/dev/mapper/xffdb_data05_new)
NOTE: Assigning number (1,2) to disk (/dev/mapper/xffdb_data03)
NOTE: Assigning number (1,5) to disk (/dev/mapper/xffdb_data06)
NOTE: Assigning number (1,3) to disk (/dev/mapper/xffdb_data04)
NOTE: Assigning number (1,1) to disk (/dev/mapper/xffdb_data02_new)
Wed Jul 31 05:27:47 2024
NOTE: GMON heartbeating for grp 1
GMON querying group 1 at 46 for pid 29, osid 26738
NOTE: cache opening disk 0 of grp 1: DATA_DG_0000 path:/dev/mapper/xffdb_data01_new
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 1: DATA_DG_0001 path:/dev/mapper/xffdb_data02_new
NOTE: cache opening disk 2 of grp 1: DATA_DG_0002 path:/dev/mapper/xffdb_data03
NOTE: cache opening disk 3 of grp 1: DATA_DG_0003 path:/dev/mapper/xffdb_data04
NOTE: cache opening disk 4 of grp 1: DATA_DG_0004 path:/dev/mapper/xffdb_data05_new
NOTE: cache opening disk 5 of grp 1: DATA_DG_0005 path:/dev/mapper/xffdb_data06
NOTE: cache mounting (first) external redundancy group 1/0x4178BD5E (DATA_DG)
Wed Jul 31 05:27:47 2024
* allocate domain 1, invalid = TRUE 
kjbdomatt send to inst 1
Wed Jul 31 05:27:47 2024
NOTE: attached to recovery domain 1
NOTE: cache recovered group 1 to fcn 0.1814063801
NOTE: redo buffer size is 256 blocks (1053184 bytes)
Wed Jul 31 05:27:47 2024
NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (DATA_DG)
NOTE: LGWR found thread 1 closed at ABA 12401.4517
NOTE: LGWR mounted thread 1 for diskgroup 1 (DATA_DG)
NOTE: LGWR opening thread 1 at fcn 0.1814063801 ABA 12402.4518
NOTE: cache mounting group 1/0x4178BD5E (DATA_DG) succeeded
NOTE: cache ending mount (success) of group DATA_DG number=1 incarn=0x4178bd5e
Wed Jul 31 05:27:47 2024
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
SUCCESS: diskgroup DATA_DG was mounted
SUCCESS:  alter diskgroup data_dg mount

Important reminder: if you change the ownership or permissions of a multipath device by hand, they may revert to the default root:disk the next time the device is accessed. For such devices, set ownership and permissions through udev rules instead.

Oracle 23ai still supports bbed

bbed, a long-standing Oracle recovery tool, can still be compiled successfully in the latest Oracle 23ai and works normally:

[oracle@xifenfei db_1]$ make -f ./rdbms/lib/ins_rdbms.mk BBED=./bin/bbed ./bin/bbed

Linking BBED utility (bbed)
rm -f bin/bbed
/u01/app/oracle/product/23ai/db_1/bin/orald -o bin/bbed -m64 -z 
…………
blist` -ldl -lm   -L/u01/app/oracle/product/23ai/db_1/lib
[oracle@xifenfei db_1]$ 
[oracle@xifenfei db_1]$ 
[oracle@xifenfei db_1]$ bbed
Password: 

BBED: Release 2.0.0.0.0 - for Oracle Cloud and Engineered Systems on Sun Jul 28 01:50:44 2024

Copyright (c) 1982, 2024, Oracle and/or its affiliates.  All rights reserved.

************* !!! For Oracle Internal Use only !!! ***************

BBED> set filename '/u01/app/oracle/oradata/ORA23AI/system01.dbf'
        FILENAME        /u01/app/oracle/oradata/ORA23AI/system01.dbf

BBED> set blocksize 8192
        BLOCKSIZE       8192

BBED> set block 1
        BLOCK#          1

BBED> map
 File: /u01/app/oracle/oradata/ORA23AI/system01.dbf (0)
 Block: 1                                     Dba:0x00000000
------------------------------------------------------------
 Data File Header

 struct kcvfh, 1272 bytes                   @0       

 ub4 tailchk                                @8188    


BBED> p kcvfh
struct kcvfh, 1272 bytes                    @0       
   struct kcvfhbfh, 20 bytes                @0       
      ub1 type_kcbh                         @0        0x0b
      ub1 frmt_kcbh                         @1        0xa2
      ub2 wrp2_kcbh                         @2        0x0000
      ub4 rdba_kcbh                         @4        0x00000001
      ub4 bas_kcbh                          @8        0x00000000
      ub2 wrp_kcbh                          @12       0x0000
      ub1 seq_kcbh                          @14       0x01
      ub1 flg_kcbh                          @15       0x04 (KCBHFCKV)
      ub2 chkval_kcbh                       @16       0xed1e
      ub2 spare3_kcbh                       @18       0x0000
   struct kcvfhhdr, 76 bytes                @20      
      ub4 kccfhswv                          @20       0x00000000
      ub4 kccfhcvn                          @24       0x17040000
      ub4 kccfhdbi                          @28       0x8fe89c42
      text kccfhdbn[0]                      @32      O
      text kccfhdbn[1]                      @33      R
      text kccfhdbn[2]                      @34      A
      text kccfhdbn[3]                      @35      2
      text kccfhdbn[4]                      @36      3
      text kccfhdbn[5]                      @37      A
      text kccfhdbn[6]                      @38      I
      text kccfhdbn[7]                      @39       
      ub4 kccfhcsq                          @40       0x000002db
      ub4 kccfhfsz                          @44       0x00020300
      s_blkz kccfhbsz                       @48       0x00
      ub2 kccfhfno                          @52       0x0001
      ub2 kccfhtyp                          @54       0x0003
      ub4 kccfhacid                         @56       0x00000000
      ub4 kccfhcks                          @60       0x00000000
      text kccfhtag[0]                      @64       
      text kccfhtag[1]                      @65       
      text kccfhtag[2]                      @66       
      text kccfhtag[3]                      @67       
      text kccfhtag[4]                      @68       
      text kccfhtag[5]                      @69       
      text kccfhtag[6]                      @70       
      text kccfhtag[7]                      @71       
      text kccfhtag[8]                      @72       
      text kccfhtag[9]                      @73       
      text kccfhtag[10]                     @74       
      text kccfhtag[11]                     @75       
      text kccfhtag[12]                     @76       
      text kccfhtag[13]                     @77       
      text kccfhtag[14]                     @78       
      text kccfhtag[15]                     @79       
      text kccfhtag[16]                     @80       
      text kccfhtag[17]                     @81       
      text kccfhtag[18]                     @82       
      text kccfhtag[19]                     @83       
      text kccfhtag[20]                     @84       
      text kccfhtag[21]                     @85       
      text kccfhtag[22]                     @86       
      text kccfhtag[23]                     @87       
      text kccfhtag[24]                     @88       
      text kccfhtag[25]                     @89       
      text kccfhtag[26]                     @90       
      text kccfhtag[27]                     @91       
      text kccfhtag[28]                     @92       
      text kccfhtag[29]                     @93       
      text kccfhtag[30]                     @94       
      text kccfhtag[31]                     @95       
   ub4 kcvfhrdb                             @96       0x00000208
   struct kcvfhcrs, 8 bytes                 @100     
      ub4 kscnbas                           @100      0x00000008
      ub2 kscnwrp                           @104      0x8000
      ub2 kscnwrp2                          @106      0x0000
   ub4 kcvfhcrt                             @108      0x45fa1056
   ub4 kcvfhrlc                             @112      0x460f5f02
   struct kcvfhrls, 8 bytes                 @116     
      ub4 kscnbas                           @116      0x001f315f
      ub2 kscnwrp                           @120      0x8000
      ub2 kscnwrp2                          @122      0x0000
   ub4 kcvfhbti                             @124      0x00000000
   struct kcvfhbsc, 8 bytes                 @128     
      ub4 kscnbas                           @128      0x00000000
      ub2 kscnwrp                           @132      0x0000
      ub2 kscnwrp2                          @134      0x0000
   ub2 kcvfhbth                             @136      0x0000
   ub2 kcvfhsta                             @138      0x2004 (KCVFHOFZ)
   struct kcvfhckp, 36 bytes                @484     
      struct kcvcpscn, 8 bytes              @484     
         ub4 kscnbas                        @484      0x001f76c3
         ub2 kscnwrp                        @488      0x8000
         ub2 kscnwrp2                       @490      0x0000
      ub4 kcvcptim                          @492      0x460f5f87
      ub2 kcvcpthr                          @496      0x0001
      union u, 12 bytes                     @500     
         struct kcvcprba, 12 bytes          @500     
            ub4 kcrbaseq                    @500      0x00000001
            ub4 kcrbabno                    @504      0x0001ea34
            ub2 kcrbabof                    @508      0x0010
      ub1 kcvcpetb[0]                       @512      0x02
      ub1 kcvcpetb[1]                       @513      0x00
      ub1 kcvcpetb[2]                       @514      0x00
      ub1 kcvcpetb[3]                       @515      0x00
      ub1 kcvcpetb[4]                       @516      0x00
      ub1 kcvcpetb[5]                       @517      0x00
      ub1 kcvcpetb[6]                       @518      0x00
      ub1 kcvcpetb[7]                       @519      0x00
   ub4 kcvfhcpc                             @140      0x0000002d
   ub4 kcvfhrts                             @144      0x460f5edf
   ub4 kcvfhccc                             @148      0x0000002c
   struct kcvfhbcp, 36 bytes                @152     
      struct kcvcpscn, 8 bytes              @152     
         ub4 kscnbas                        @152      0x00000000
         ub2 kscnwrp                        @156      0x0000
         ub2 kscnwrp2                       @158      0x0000
      ub4 kcvcptim                          @160      0x00000000
      ub2 kcvcpthr                          @164      0x0000
      union u, 12 bytes                     @168     
         struct kcvcprba, 12 bytes          @168     
            ub4 kcrbaseq                    @168      0x00000000
            ub4 kcrbabno                    @172      0x00000000
            ub2 kcrbabof                    @176      0x0000
      ub1 kcvcpetb[0]                       @180      0x00
      ub1 kcvcpetb[1]                       @181      0x00
      ub1 kcvcpetb[2]                       @182      0x00
      ub1 kcvcpetb[3]                       @183      0x00
      ub1 kcvcpetb[4]                       @184      0x00
      ub1 kcvcpetb[5]                       @185      0x00
      ub1 kcvcpetb[6]                       @186      0x00
      ub1 kcvcpetb[7]                       @187      0x00
   ub4 kcvfhbhz                             @312      0x00000000
   struct kcvfhxcd, 16 bytes                @316     
      ub4 space_kcvmxcd[0]                  @316      0x00000000
      ub4 space_kcvmxcd[1]                  @320      0x00000000
      ub4 space_kcvmxcd[2]                  @324      0x00000000
      ub4 space_kcvmxcd[3]                  @328      0x00000000
   sword kcvfhtsn                           @332      0
   ub2 kcvfhtln                             @336      0x0006
   text kcvfhtnm[0]                         @338     S
   text kcvfhtnm[1]                         @339     Y
   text kcvfhtnm[2]                         @340     S
   text kcvfhtnm[3]                         @341     T
   text kcvfhtnm[4]                         @342     E
   text kcvfhtnm[5]                         @343     M
   text kcvfhtnm[6]                         @344      
   text kcvfhtnm[7]                         @345      
   text kcvfhtnm[8]                         @346      
   text kcvfhtnm[9]                         @347      
   text kcvfhtnm[10]                        @348      
   text kcvfhtnm[11]                        @349      
   text kcvfhtnm[12]                        @350      
   text kcvfhtnm[13]                        @351      
   text kcvfhtnm[14]                        @352      
   text kcvfhtnm[15]                        @353      
   text kcvfhtnm[16]                        @354      
   text kcvfhtnm[17]                        @355      
   text kcvfhtnm[18]                        @356      
   text kcvfhtnm[19]                        @357      
   text kcvfhtnm[20]                        @358      
   text kcvfhtnm[21]                        @359      
   text kcvfhtnm[22]                        @360      
   text kcvfhtnm[23]                        @361      
   text kcvfhtnm[24]                        @362      
   text kcvfhtnm[25]                        @363      
   text kcvfhtnm[26]                        @364      
   text kcvfhtnm[27]                        @365      
   text kcvfhtnm[28]                        @366      
   text kcvfhtnm[29]                        @367      
   ub4 kcvfhrfn                             @368      0x00000400
   struct kcvfhrfs, 8 bytes                 @372     
      ub4 kscnbas                           @372      0x00000000
      ub2 kscnwrp                           @376      0x0000
      ub2 kscnwrp2                          @378      0x0000
   ub4 kcvfhrft                             @380      0x00000000
   struct kcvfhafs, 8 bytes                 @384     
      ub4 kscnbas                           @384      0x00000000
      ub2 kscnwrp                           @388      0x0000
      ub2 kscnwrp2                          @390      0x0000
   ub4 kcvfhbbc                             @392      0x00000000
   ub4 kcvfhncb                             @396      0x00000000
   ub4 kcvfhmcb                             @400      0x00000000
   ub4 kcvfhlcb                             @404      0x00000000
   ub4 kcvfhbcs                             @408      0x00000000
   ub2 kcvfhofb                             @412      0x000a
   ub2 kcvfhnfb                             @414      0x000a
   ub4 kcvfhprc                             @416      0x45fa1051
   struct kcvfhprs, 8 bytes                 @420     
      ub4 kscnbas                           @420      0x00000001
      ub2 kscnwrp                           @424      0x0000
      ub2 kscnwrp2                          @426      0x0000
   struct kcvfhprfs, 8 bytes                @428     
      ub4 kscnbas                           @428      0x00000000
      ub2 kscnwrp                           @432      0x0000
      ub2 kscnwrp2                          @434      0x0000
   ub4 kcvfhtrt                             @444      0x00000000

Previous articles on the same topic:
bbed still works with Oracle 21c
Oracle 19c still supports bbed
Oracle 12.2 still supports bbed
Oracle 12c still supports bbed
Running bbed on 64-bit Windows (supports Oracle 10g, 11g and 12c)

Handling ORA-27154 ORA-27300 ORA-27301 ORA-27302

Some kernel parameters on a system were adjusted based on experience, after which the database failed to start, reporting ORA-27154 ORA-27300 ORA-27301 ORA-27302:

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates


According to the official note Database Startup Fails with ORA-27300: OS system dependent operation:semget failed with status: 28 (Doc ID 949468.1), this problem can occur when the kernel.sem parameter is not configured appropriately. At the time, the database's processes parameter was set to 20000 and kernel.sem was set to kernel.sem = 250 32000 100 128. The four values mean:

kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI
SEMMSL - max semaphores per array
SEMMNS - max semaphores system wide
SEMOPM - max ops per semop call
SEMMNI - max number of arrays

In theory this configuration allows at most SEMMSL*SEMMNI = 250*128 = 32000 semaphores, which is larger than the processes setting of 20000. In practice, however, each semaphore set contained not 250 semaphores but only 156, as the ipcs command shows:

[oracle@xifenfei ~]$ ipcs

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x00000000 32768      oracle     640        33554432   30                      
0x00000000 65537      oracle     640        4261412864 30                      
0xc2d167d0 98306      oracle     640        2097152    30                      
0x00000072 131075     root       444        1          1                       

------ Semaphore Arrays --------
key        semid      owner      perms      nsems   
0x450e15bd 0 	      root       666        1
0x0000cace 32769      root       666        1
0x358b172c 327683     oracle     660        104
0x9053d038 11075588   oracle     660        156
0x9053d039 11108357   oracle     660        156
0x9053d03a 11141126   oracle     660        156
0x9053d03b 11173895   oracle     660        156

With only 156 semaphores per set, the effective SEMMSL*SEMMNI (156*128 = 19968) was smaller than the processes value, so the database startup failed with ORA-27154 ORA-27300 ORA-27301 ORA-27302. After changing kernel.sem to 250 64000 128 256, the database started successfully.
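
A sketch of how the change is made persistent (the values are the ones used in this case; size SEMMSL*SEMMNI so it comfortably exceeds the processes settings of all instances on the host):

# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
kernel.sem = 250 64000 128 256

# load the new values and verify them
sysctl -p
cat /proc/sys/kernel/sem
ipcs -ls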

AWR snapshot creation waits on library cache: mutex X

On a customer's 11.2.0.4 database, while preparing to collect an AWR report, we found that no recent AWR snap ids existed.

Creating a snapshot manually simply hung.
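
A manual snapshot is normally taken with the standard DBMS_WORKLOAD_REPOSITORY call, for example:

SQL> exec dbms_workload_repository.create_snapshot;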

The session's wait event was library cache: mutex X. The earlier trace of the MMON slave process (m000/m001) contains the following:

Trace file /u01/app/oracle/diag/rdbms/xff/xff/trace/xff_m000_6241.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1
System name:    Linux
Node name:      HIS
Release:        5.4.17-2102.201.3.el7uek.x86_64
Version:        #2 SMP Fri Apr 23 09:05:55 PDT 2021
Machine:        x86_64
VM name:        VMWare Version: 6
Instance name: xff
Redo thread mounted by this instance: 1
Oracle process number: 5714
Unix process pid: 6241, image: oracle@HIS (M000)


*** 2024-06-19 11:44:39.483
*** SESSION ID:(8709.38013) 2024-06-19 11:44:39.483
*** CLIENT ID:() 2024-06-19 11:44:39.483
*** SERVICE NAME:(SYS$BACKGROUND) 2024-06-19 11:44:39.483
*** MODULE NAME:(MMON_SLAVE) 2024-06-19 11:44:39.483
*** ACTION NAME:(Auto-Flush Slave Action) 2024-06-19 11:44:39.483

DDE rules only execution for: ORA 12751
----- START Event Driven Actions Dump ----
---- END Event Driven Actions Dump ----
----- START DDE Actions Dump -----
Executing SYNC actions
Executing ASYNC actions
----- START DDE Action: 'ORA_12751_DUMP' (Sync) -----
Runtime exceeded 900 seconds
Time limit violation detected at:
ksedsts()+465<-kspol_12751_dump()+145<-dbgdaExecuteAction()+1065<-dbgerRunAction()+109<-dbgerRunActions()
+4134<-dbgexPhaseII()+1873<-dbgexProcessError()+2680<-dbgeExecuteForError()+88<-dbgePostErrorKGE()+2136<-
dbkePostKGE_kgsf()+71<-kgeselv()+276<-kgesecl0()+139<-kgxWait()+1412<-kgxExclusive()+447<-
kglGetMutex()+140<-kglGetHandleReference()+69<-kglic0()+319<-kksIterCursorStat()+330<-kewrrtsq_rank_topsql()
+240<-kewrbtsq_build_topsql()+128<-kewrftsq_flush_topsql()+679<-kewrft_flush_table()+397<-
kewrftec_flush_table_ehdlcx()+766<-kewrfat_flush_all_tables()+1406<-kewrfos_flush_onesnap()+170
<-kewrfsc_flush_snapshot_c()+623<-kewrafs_auto_flush_slave()+769<-kebm_slave_main()+586<-ksvrdp()+1766
<-opirip()+674<-opidrv()+603<-sou2o()+103<-opimai_real()+250<-ssthrdmain()+265<-main()+201
<-__libc_start_main()+245
Current Wait Stack:
 0: waiting for 'library cache: mutex X'
    idn=0x644e2de0, value=0xf3a00000000, where=0x7c
    wait_id=1189 seq_num=1190 snap_id=1
    wait times: snap=3 min 0 sec, exc=3 min 0 sec, total=3 min 0 sec
    wait times: max=infinite, heur=15 min 3 sec
    wait counts: calls=16376 os=16376
    in_wait=1 iflags=0x15b2
There is at least one session blocking this session.
  Dumping 1 direct blocker(s):
    inst: 1, sid: 3898, ser: 47299
  Dumping final blocker:
    inst: 1, sid: 3898, ser: 47299
Wait State:
  fixed_waits=0 flags=0x22 boundary=(nil)/-1
Session Wait History:
    elapsed time of 0.000016 sec since current wait
 0: waited for 'library cache: mutex X'
    idn=0x644e2de0, value=0xf3a00000000, where=0x7c
    wait_id=1188 seq_num=1189 snap_id=1
    wait times: snap=12 min 2 sec, exc=12 min 2 sec, total=12 min 2 sec
    wait times: max=infinite
    wait counts: calls=65535 os=65535
    occurred after 0.327543 sec of elapsed time
 1: waited for 'db file sequential read'
    file#=0x2, block#=0x1a5b, blocks=0x1
    wait_id=1187 seq_num=1188 snap_id=1
    wait times: snap=0.000420 sec, exc=0.000420 sec, total=0.000420 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000251 sec of elapsed time
 2: waited for 'db file sequential read'
    file#=0x1, block#=0x82e6, blocks=0x1
    wait_id=1186 seq_num=1187 snap_id=1
    wait times: snap=0.000429 sec, exc=0.000429 sec, total=0.000429 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.001085 sec of elapsed time
 3: waited for 'db file sequential read'
    file#=0x2, block#=0x11344, blocks=0x1
    wait_id=1185 seq_num=1186 snap_id=1
    wait times: snap=0.000356 sec, exc=0.000356 sec, total=0.000356 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000008 sec of elapsed time
 4: waited for 'db file sequential read'
    file#=0x2, block#=0x19eb, blocks=0x1
    wait_id=1184 seq_num=1185 snap_id=1
    wait times: snap=0.000397 sec, exc=0.000397 sec, total=0.000397 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000044 sec of elapsed time
 5: waited for 'db file sequential read'
    file#=0x2, block#=0xb1659, blocks=0x1
    wait_id=1183 seq_num=1184 snap_id=1
    wait times: snap=0.000003 sec, exc=0.000003 sec, total=0.000003 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000010 sec of elapsed time
 6: waited for 'db file sequential read'
    file#=0x2, block#=0xb1658, blocks=0x1
    wait_id=1182 seq_num=1183 snap_id=1
    wait times: snap=0.000453 sec, exc=0.000453 sec, total=0.000453 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000009 sec of elapsed time
 7: waited for 'db file sequential read'
    file#=0x2, block#=0x19e1, blocks=0x1
    wait_id=1181 seq_num=1182 snap_id=1
    wait times: snap=0.000388 sec, exc=0.000388 sec, total=0.000388 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000017 sec of elapsed time
 8: waited for 'db file sequential read'
    file#=0x2, block#=0x19e2, blocks=0x1
    wait_id=1180 seq_num=1181 snap_id=1
    wait times: snap=0.000415 sec, exc=0.000415 sec, total=0.000415 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.004826 sec of elapsed time
 9: waited for 'db file sequential read'
    file#=0x2, block#=0x2ffc0c, blocks=0x1
    wait_id=1179 seq_num=1180 snap_id=1
    wait times: snap=0.000404 sec, exc=0.000404 sec, total=0.000404 sec
    wait times: max=infinite
    wait counts: calls=0 os=0
    occurred after 0.000007 sec of elapsed time
Sampled Session History of session 8709 serial 38013
---------------------------------------------------
The sampled session history is constructed by sampling
the target session every 1 second. The sampling process
captures at each sample if the session is in a non-idle wait,
an idle wait, or not in a wait. If the session is in a
non-idle wait then one interval is shown for all the samples
the session was in the same non-idle wait. If the
session is in an idle wait or not in a wait for
consecutive samples then one interval is shown for all
the consecutive samples. Though we display these consecutive
samples  in a single interval the session may NOT be continuously
idle or not in a wait (the sampling process does not know).

The history is displayed in reverse chronological order.

sample interval: 1 sec, max history 120 sec
---------------------------------------------------
  [118 samples,                                            11:42:39 - 11:44:39]
    waited for 'library cache: mutex X', seq_num: 1190
      p1: 'idn'=0x644e2de0
      p2: 'value'=0xf3a00000000
      p3: 'where'=0x7c
      time_waited: >= 120 sec (still in wait)
  [3 samples,                                              11:42:39 - 11:42:38]
    idle wait at each sample
---------------------------------------------------
Sampled Session History Summary:
  longest_non_idle_wait: 'library cache: mutex X'
  [118 samples, 11:42:39 - 11:44:39]
      time_waited: >= 120 sec (still in wait)
---------------------------------------------------
----- END DDE Action: 'ORA_12751_DUMP' (SUCCESS, 8 csec) -----
----- END DDE Actions Dump (total 8 csec) -----
KGX cleanup...
KGX Atomic Operation Log 0x1de44da670
 Mutex 0x1d113cf7c8(8709, 0) idn 2de0 oper EXCL(6)
 Library Cache uid 8709 efd 7 whr 49 slp 0
 oper=0 pt1=(nil) pt2=(nil) pt3=(nil)
 pt4=(nil) pt5=(nil) ub4=0
KGX cleanup...
KGX Atomic Operation Log 0x1de44da6c8
 Mutex 0x1de9468550(3898, 0) idn 644e2de0 oper GET_EXCL(5)
 Library Cache uid 8709 efd 7 whr 124 slp 16376
 oper=0 pt1=0x1de9468410 pt2=(nil) pt3=(nil)
 pt4=(nil) pt5=(nil) ub4=0
*** KEWRAFM1: Error=12751 encountered by kewrfteh
*** KEWRAFS: Error=12751 encountered by Auto Flush Slave.
KEBM: MMON slave action policy violation. kewrmafsa_; viol=1; err=12751

Generally, when a system background task like this is blocked, a bug is the most likely cause. The relevant MOS note is: library cache: mutex x waits during AWR Flush High Cursor Scan (Doc ID 2382741.1), which matches Bug 19294556 AWR Flush Waiting For Cursor Scan, Library Cache : Mutex X. There is currently no good workaround, and no corresponding patch is available on the 11.2.0.4 base release.
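
For live triage of this kind of hang, the direct blocker reported in the trace (inst 1, sid 3898) can also be cross-checked from another session. The queries below are only a minimal sketch against standard 11.2 views (v$session and v$mutex_sleep); they are not taken from the original incident:

-- sessions currently waiting on the library cache mutex, with their blockers
SELECT sid, serial#, program, event, p1 idn, blocking_session
  FROM v$session
 WHERE event = 'library cache: mutex X';

-- mutex locations that have accumulated the most sleeps
SELECT mutex_type, location, sleeps, wait_time
  FROM v$mutex_sleep
 ORDER BY sleeps DESC;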

truncate of IDL_UB1$ causes the database to hang during open

Contact: phone/WeChat (+86 17813235971)  QQ (107644445)  consult 惜分飞 via QQ

Title: truncate of IDL_UB1$ causes the database to hang during open

Author: 惜分飞 © All rights reserved [No reproduction in any form without the author's consent; otherwise the right to pursue further legal liability is reserved.]

During a database recovery it was found that the IDL_UB1$ table had been truncated; the database then hangs during open without reporting any error. The problem is reproduced here in a test environment. A similar case was handled and tested before: truncate IDL_UB1$ recovery.
Test database version

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> select object_name,object_type,owner,object_id,data_object_id,
  2  last_ddl_time from dba_objects where object_name like 'IDL_UB%';

OBJECT_NAME   OBJECT_TYPE         OWNER     OBJECT_ID DATA_OBJECT_ID LAST_DDL_TIM
------------- ------------------- -------- ---------- -------------- ------------
IDL_UB1$      TABLE               SYS             225            225 24-AUG-13
IDL_UB2$      TABLE               SYS             227            227 24-AUG-13

Truncate the IDL_UB1$ table

SQL> truncate table IDL_UB1$;
truncate table IDL_UB1$
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 1090
Session ID: 125 Serial number: 5

Error messages in the alert log

Sat May 25 20:23:11 2024
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x18] [PC:0x98DA2B7, hshuid()+273] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei/trace/xifenfei_ora_1090.trc  (incident=60156):
ORA-07445: exception encountered: core dump [hshuid()+273] [SIGSEGV] [ADDR:0x18] [PC:0x98DA2B7] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/xifenfei/xifenfei/incident/incdir_60156/xifenfei_ora_1090_i60156.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
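
The alert log itself suggests packaging the incident with ADRCI for Oracle Support. A minimal sketch (the ADR home and incident id are taken from the log above; exact ADRCI syntax can vary slightly between versions):

[oracle@xifenfei ~]$ adrci
adrci> show homes
adrci> set homepath diag/rdbms/xifenfei/xifenfei
adrci> ips pack incident 60156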

Check and confirm that the IDL_UB1$ table was truncated successfully

SQL> select count(1) from IDL_UB1$;

  COUNT(1)
----------
         0

SQL> select object_id,data_object_id,last_ddl_time from dba_objects where object_name like 'IDL_UB%';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------

       225          87377

Restarting the database hangs during open
(screenshot: open_hang)


Queries from a new session
The database reports OPEN, but DDL cannot be executed and exp/expdp exports also hang

[oracle@xifenfei ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 25 20:29:01 2024

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

--the DDL below hangs
SQL> create table t1 as select * from dba_objects;

--exp hangs
[oracle@xifenfei ~]$ exp "'/ as sysdba'" tables=obj$

Export: Release 11.2.0.4.0 - Production on Sat May 25 20:39:03 2024

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

--expdp hangs
[oracle@xifenfei ~]$ expdp "'/ as sysdba'" tables=obj$

Export: Release 11.2.0.4.0 - Production on Sat May 25 20:39:35 2024

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Wait-event information while the database hangs during open
The session is stuck on the 'db file sequential read' wait event, the waited file#/block# never changes, and the object being read turns out to be IDL_UB2$ (a one-step variant of the block-to-segment lookup is sketched after the DBA_EXTENTS query below)

SQL> select username,program,event,P1,P2,P3 from v$session where wait_class#<>6 and username='SYS'
  2  AND SID NOT IN(SELECT DISTINCT SID FROM V$MYSTAT);

USERNAME                       PROGRAM
------------------------------ ------------------------------------------------
EVENT                                                                    P1
---------------------------------------------------------------- ----------
        P2         P3
---------- ----------
SYS                            sqlplus@xifenfei (TNS V1-V3)
db file sequential read                                                   1
      1531          1


SQL> /

USERNAME                       PROGRAM
------------------------------ ------------------------------------------------
EVENT                                                                    P1
---------------------------------------------------------------- ----------
        P2         P3
---------- ----------
SYS                            sqlplus@xifenfei (TNS V1-V3)
db file sequential read                                                   1
      1531          1


SQL> /

USERNAME                       PROGRAM
------------------------------ ------------------------------------------------
EVENT                                                                    P1
---------------------------------------------------------------- ----------
        P2         P3
---------- ----------
SYS                            sqlplus@xifenfei (TNS V1-V3)
db file sequential read                                                   1
      1531          1

--identify the object being read
SQL> SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, A.PARTITION_NAME
  2    FROM DBA_EXTENTS A
  3   WHERE FILE_ID = &FILE_ID
  4     AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
Enter value for file_id: 1
old   3:  WHERE FILE_ID = &FILE_ID
new   3:  WHERE FILE_ID = 1
Enter value for block_id: 1531
old   4:    AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1
new   4:    AND 1531 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
SEGMENT_TYPE       TABLESPACE_NAME                PARTITION_NAME
------------------ ------------------------------ ------------------------------
SYS
IDL_UB2$
TABLE              SYSTEM
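
As an aside, the same file#/block#-to-segment mapping can be done in a single step by joining the waiting session's P1/P2 directly to DBA_EXTENTS. This is only a sketch (not part of the original diagnosis), and it can be slow because DBA_EXTENTS is expensive to scan:

SELECT s.sid, s.event, e.owner, e.segment_name, e.segment_type
  FROM v$session s, dba_extents e
 WHERE s.event = 'db file sequential read'
   AND e.file_id = s.p1
   AND s.p2 BETWEEN e.block_id AND e.block_id + e.blocks - 1;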

Tracing the database startup process shows that it is stuck on the statement select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by piece#.
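
The post does not show how the startup was traced. One common way to capture recursive SQL like this during open (an assumption here, not necessarily what was actually used) is SQL trace event 10046 at level 8 in the session that opens the database:

SQL> startup mount;
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> alter database open;
-- the resulting trace is written to this session's trace file under the ADR trace directory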

PARSING IN CURSOR #140737276517064 len=132 dep=2 uid=0 oct=3 lid=0 tim=1716640552642714 hv=4260389146 ad='80963c78' sqlid='cvn54b7yz0s8u'
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by piece#
END OF STMT
PARSE #140737276517064:c=17,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=3246118364,tim=1716640552642713
BINDS #140737276517064:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f73a0  bln=22  avl=03  flg=05
  value=1310
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7370  bln=24  avl=01  flg=05
  value=0
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7340  bln=24  avl=06  flg=05
  value=184549376
EXEC #140737276517064:c=72,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=3246118364,tim=1716640552642819
FETCH #140737276517064:c=4,e=5,p=0,cr=1,cu=0,mis=0,r=0,dep=2,og=4,plh=3246118364,tim=1716640552642832
STAT #140737276517064 id=1 cnt=0 pid=0 pos=1 obj=225 op='TABLE ACCESS BY INDEX ROWID IDL_UB1$ (cr=1 pr=0 pw=0 time=5 us cost=3 size=44 card=2)'
STAT #140737276517064 id=2 cnt=0 pid=1 pos=1 obj=236 op='INDEX RANGE SCAN I_IDL_UB11 (cr=1 pr=0 pw=0 time=4 us cost=2 size=0 card=2)'
CLOSE #140737276517064:c=2,e=2,dep=2,type=0,tim=1716640552642858
=====================
PARSING IN CURSOR #140737276517064 len=135 dep=2 uid=0 oct=3 lid=0 tim=1716640552642876 hv=1115215392 ad='808c4c68' sqlid='c6awqs517jpj0'
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ where obj#=:1 and part=:2 and version=:3 order by piece#
END OF STMT
PARSE #140737276517064:c=12,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1319326155,tim=1716640552642875
BINDS #140737276517064:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f73a0  bln=22  avl=03  flg=05
  value=1310
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7370  bln=24  avl=01  flg=05
  value=0
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7340  bln=24  avl=06  flg=05
  value=184549376
EXEC #140737276517064:c=73,e=72,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=1319326155,tim=1716640552642977
WAIT #140737276517064: nam='db file sequential read' ela= 10 file#=1 block#=1610 blocks=1 obj#=237 tim=1716640552643001
WAIT #140737276517064: nam='db file sequential read' ela= 9 file#=1 block#=1522 blocks=1 obj#=226 tim=1716640552643027
WAIT #140737276517064: nam='db file sequential read' ela= 8 file#=1 block#=1521 blocks=1 obj#=226 tim=1716640552643051
FETCH #140737276517064:c=81,e=80,p=3,cr=5,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643065
WAIT #140737276517064: nam='db file sequential read' ela= 9 file#=1 block#=1524 blocks=1 obj#=226 tim=1716640552643124
WAIT #140737276517064: nam='db file sequential read' ela= 7 file#=1 block#=1523 blocks=1 obj#=226 tim=1716640552643143
FETCH #140737276517064:c=40,e=40,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643150
WAIT #140737276517064: nam='db file sequential read' ela= 9 file#=1 block#=1526 blocks=1 obj#=226 tim=1716640552643216
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=1525 blocks=1 obj#=226 tim=1716640552643232
FETCH #140737276517064:c=36,e=36,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643239
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=8200 blocks=1 obj#=226 tim=1716640552643405
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=1527 blocks=1 obj#=226 tim=1716640552643420
FETCH #140737276517064:c=31,e=31,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643427
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=8202 blocks=1 obj#=226 tim=1716640552643583
WAIT #140737276517064: nam='db file sequential read' ela= 7 file#=1 block#=8201 blocks=1 obj#=226 tim=1716640552643600
FETCH #140737276517064:c=34,e=35,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643608
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=8204 blocks=1 obj#=226 tim=1716640552643766
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=8203 blocks=1 obj#=226 tim=1716640552643785
FETCH #140737276517064:c=35,e=35,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552643792
WAIT #140737276517064: nam='db file sequential read' ela= 8 file#=1 block#=8206 blocks=1 obj#=226 tim=1716640552644038
WAIT #140737276517064: nam='db file sequential read' ela= 8 file#=1 block#=8205 blocks=1 obj#=226 tim=1716640552644062
FETCH #140737276517064:c=49,e=49,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552644070
WAIT #140737276517064: nam='db file sequential read' ela= 7 file#=1 block#=8208 blocks=1 obj#=226 tim=1716640552644296
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=8207 blocks=1 obj#=226 tim=1716640552644313
FETCH #140737276517064:c=36,e=35,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552644320
WAIT #140737276517064: nam='db file sequential read' ela= 7 file#=1 block#=8209 blocks=1 obj#=226 tim=1716640552644479
FETCH #140737276517064:c=20,e=20,p=1,cr=3,cu=0,mis=0,r=1,dep=2,og=4,plh=1319326155,tim=1716640552644487
FETCH #140737276517064:c=2,e=1,p=0,cr=1,cu=0,mis=0,r=0,dep=2,og=4,plh=1319326155,tim=1716640552644545
STAT #140737276517064 id=1 cnt=9 pid=0 pos=1 obj=226 op='TABLE ACCESS BY INDEX ROWID IDL_CHAR$ (cr=20 pr=10 pw=0 time=53 us cost=3 size=21 card=1)'
STAT #140737276517064 id=2 cnt=9 pid=1 pos=1 obj=237 op='INDEX RANGE SCAN I_IDL_CHAR1 (cr=11 pr=1 pw=0 time=46 us cost=2 size=0 card=1)'
CLOSE #140737276517064:c=3,e=3,dep=2,type=0,tim=1716640552644579
=====================
PARSING IN CURSOR #140737276517064 len=132 dep=2 uid=0 oct=3 lid=0 tim=1716640552644609 hv=1684122946 ad='80924938' sqlid='39m4sx9k63ba2'
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ where obj#=:1 and part=:2 and version=:3 order by piece#
END OF STMT
PARSE #140737276517064:c=17,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=2317816222,tim=1716640552644609
BINDS #140737276517064:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f73a0  bln=22  avl=03  flg=05
  value=1310
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7370  bln=24  avl=01  flg=05
  value=0
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7ffff35f7340  bln=24  avl=06  flg=05
  value=184549376
EXEC #140737276517064:c=92,e=93,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=2317816222,tim=1716640552644735
WAIT #140737276517064: nam='db file sequential read' ela= 9 file#=1 block#=1618 blocks=1 obj#=238 tim=1716640552644761
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=1530 blocks=1 obj#=227 tim=1716640552644778
WAIT #140737276517064: nam='db file sequential read' ela= 10 file#=1 block#=1529 blocks=1 obj#=227 tim=1716640552644799
FETCH #140737276517064:c=60,e=60,p=3,cr=5,cu=0,mis=0,r=1,dep=2,og=4,plh=2317816222,tim=1716640552644807
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=1532 blocks=1 obj#=227 tim=1716640552644924
WAIT #140737276517064: nam='db file sequential read' ela= 6 file#=1 block#=1531 blocks=1 obj#=227 tim=1716640552644940
FETCH #140737276517064:c=33,e=33,p=2,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=2317816222,tim=1716640552644947