A false corrupt block causes a panic

The following entries appeared in the alert log:
Fri Sep 16 02:58:25 2011
Hex dump of (file 19, block 1444767) in trace file /opt/oracle/admin/ora9i/udump/ora9i_ora_24702.trc
Corrupt block relative dba: 0x04d60b9f (file 19, block 1444767)
Fractured block found during backing up datafile
Data in bad block:
type: 6 format: 2 rdba: 0x04d60b9f
last change scn: 0x0abf.56961827 seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x7c340601
check value in block header: 0xba17
computed block checksum: 0x6413
Reread of blocknum=1444767, file=/opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf. found valid data
Because this is one of our more important production servers, a corrupt block that is not dealt with promptly could have serious consequences, so I checked right away.
First, find out which object is affected (it turned out to be an index on a partitioned table, which was a relief: at worst the index can simply be rebuilt, so nothing serious):

SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, A.PARTITION_NAME
  FROM DBA_EXTENTS A
 WHERE FILE_ID = &FILE_ID
   AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
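Plugging in the values from the alert entry above (file 19, block 1444767), the query becomes:

SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, A.PARTITION_NAME
  FROM DBA_EXTENTS A
 WHERE FILE_ID = 19
   AND 1444767 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;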

Then verify whether the block is really corrupt.
Method 1: dbv

[oracle@DB1 bdump]$ dbv file=/opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf blocksize=8192
DBVERIFY: Release 10.2.0.4.0 - Production on Fri Sep 16 09:15:11 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
DBVERIFY - Verification starting : FILE = /opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf
DBVERIFY - Verification complete
Total Pages Examined         : 2207744
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 2201581
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 3271
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 2892
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 1454468581 (2751.1454468581)

Conclusion: no corrupt blocks.
Method 2: bbed

[oracle@DB1 lib]$ bbed
Password:
BBED: Release 2.0.0.0.0 - Limited Production on Fri Sep 16 10:01:26 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
************* !!! For Oracle Internal Use only !!! ***************
BBED> set filename '/opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf'
        FILENAME        /opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf
BBED> set block 1444767
        BLOCK#          1444767
BBED> VERIFY
DBVERIFY - Verification starting
FILE = /opt/oracle/oradata/ora9i/TS_INDX_Base.005.dbf
BLOCK = 1444767
DBVERIFY - Verification complete
Total Blocks Examined         : 1
Total Blocks Processed (Data) : 0
Total Blocks Failing   (Data) : 0
Total Blocks Processed (Index): 1
Total Blocks Failing   (Index): 0
Total Blocks Empty            : 0
Total Blocks Marked Corrupt   : 0
Total Blocks Influx           : 0

This again shows no corruption.
Method 3: RMAN
RMAN> backup validate datafile 19;
Querying V$BACKUP_CORRUPTION afterwards also showed no corrupt blocks.
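The check itself is just a query against the corruption views (on 10g, V$DATABASE_BLOCK_CORRUPTION is the more general one):
SELECT * FROM V$BACKUP_CORRUPTION;
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;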
Method 4: query through the index
The results were normal too: no errors were raised and nothing new appeared in the alert log.
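A sketch of what the index check can look like (the object names here are placeholders, not the real ones from this system):
-- walk the index structure and raise an error if any block is inconsistent
ANALYZE INDEX owner.suspect_index VALIDATE STRUCTURE;
-- or force a scan through the index with a hint
SELECT /*+ INDEX(t suspect_index) */ COUNT(*) FROM owner.part_table t WHERE indexed_col IS NOT NULL;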
So the alert log reported an error even though nothing is actually wrong. Looking into the documentation turned up the following explanation.
From the log we can see that the block reported as corrupt has dba 0x04d60b9f (file 19, block 1444767) and the block type is 6 (trans data; all data and index blocks are of this type). After Oracle suspected the block might be corrupt it performed a reread, and the result was "found valid data", so the block is not damaged. "Fractured block found" means RMAN read the block while it was in the middle of being written; RMAN then rereads it and only treats the block as corrupt if the second read also fails. If the second read succeeds, the data is intact and there is no impact. This kind of message is fairly common when RMAN backups run while the I/O load is high.
Checking the system confirmed that this time window was exactly when the I/O load peaked.

Resolving ORA-00600 [ktbdchk1: bad dscn]

Starting the database raises an error:
SQL> startup
ORACLE instance started.
Total System Global Area  167772160 bytes
Fixed Size                  1260720 bytes
Variable Size             150995792 bytes
Database Buffers            8388608 bytes
Redo Buffers                7127040 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
The error in alert.log:
Wed Aug 10 12:31:11 2011
Errors in file /u01/admin/xienfei/udump/xff_ora_8568.trc:
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], []
Contents of xff_ora_8568.trc:
[ktbdchk] -- readers_dsz -- bad dscn
scn: 0x0000.b1e60c00 scn: 0x0000.0011fca1
*** 2011-08-10 12:31:11.998
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], []
Current SQL statement for this session:
select ctime, mtime, stime from obj$ where obj# = :1
From the error above, the bad scn is b1e60c00; this is not an scn problem affecting the whole datafile
but rather a bad scn on a single object, so keep searching xff_ora_8568.trc for b1e60c00.
The search turns up the following:
Block header dump:  0x0040007a
 Object id on Block? Y
 seg/obj: 0x12  csc: 0x00.b1e60c00  itc: 1  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0008.02a.000001d9  0x00802341.01bb.04  ----    1  fsc 0x0000.0011ae7c
data_block_dump,data header at 0x20fd6044
===============
tsiz: 0x1fb8
hsiz: 0xea
pbl: 0x20fd6044
bdba: 0x0040007a
     76543210
flag=--------
ntab=1
nrow=108
frre=-1
fsbo=0xea
fseo=0x453
avsp=0x369
tosp=0x369
0xe:pti[0]      nrow=108        offs=0
This shows that the object at dba 0040007a is the problem one; find the corresponding file_id and block:
SQL> SELECT DBMS_UTILITY.data_block_address_file (TO_NUMBER ('40007a', 'XXXXXXXX')) file_id,
  2          DBMS_UTILITY.data_block_address_block (TO_NUMBER ('40007a', 'XXXXXXXX')) block_id
  3    FROM DUAL;
   FILE_ID   BLOCK_ID
---------- ----------
         1        122
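As a cross-check, the file and block can be turned back into a dba with DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS; for file 1, block 122 it should come back as 40007A:
SQL> SELECT TO_CHAR(DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(1, 122), 'XXXXXXXX') dba FROM dual;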
Use bbed to inspect the scn of file 1, block 122:
BBED> p ktbbh
struct ktbbh, 48 bytes                      @20
   ub1 ktbbhtyp                             @20       0x01 (KDDBTDATA)
   union ktbbhsid, 4 bytes                  @24
      ub4 ktbbhsg1                          @24       0x00000012
      ub4 ktbbhod1                          @24       0x00000012
   struct ktbbhcsc, 8 bytes                 @28
      ub4 kscnbas                           @28       0xb1e60c00
      ub2 kscnwrp                           @32       0x0000
   b2 ktbbhict                              @36       1
   ub1 ktbbhflg                             @38       0x02 (NONE)
   ub1 ktbbhfsl                             @39       0x00
   ub4 ktbbhfnx                             @40       0x00000000
   struct ktbbhitl[0], 24 bytes             @44
      struct ktbitxid, 8 bytes              @44
         ub2 kxidusn                        @44       0x0008
         ub2 kxidslt                        @46       0x002a
         ub4 kxidsqn                        @48       0x000001d9
      struct ktbituba, 8 bytes              @52
         ub4 kubadba                        @52       0x00802341
         ub2 kubaseq                        @56       0x01bb
         ub1 kubarec                        @58       0x04
      ub2 ktbitflg                          @60       0x0001 (NONE)
      union _ktbitun, 2 bytes               @62
         b2 _ktbitfsc                       @62       0
         ub2 _ktbitwrp                      @62       0x0000
      ub4 ktbitbas                          @64       0x0011ae7c
Sure enough, the scn is 0xb1e60c00; now change it to 0x00124ac6 (mind the byte order: on Linux the bytes are stored little-endian, i.e. reversed, so 0x00124ac6 is written as c6 4a 12 00).
BBED> set offset 28
        OFFSET          28
BBED> m /x c64a1200
BBED-00209: invalid number (c64a1200)
A small trick: modifying all four bytes in one go fails with an error, so change a couple of bytes at a time.
BBED> m /x c64a
 File: /u01/oradata/xienfei/system01.dbf (0)
 Block: 122              Offsets:   28 to   43           Dba:0x00000000
------------------------------------------------------------------------
 c64ae6b1 00000000 01000200 00000000
 <32 bytes per line>
BBED> set offset +2
        OFFSET          30
BBED> m /x 1200
 File: /u01/oradata/xienfei/system01.dbf (0)
 Block: 122              Offsets:   30 to   45           Dba:0x00000000
------------------------------------------------------------------------
 12000000 00000100 02000000 00000800
 <32 bytes per line>
BBED> set offset -2
        OFFSET          28
BBED> dump
 File: /u01/oradata/xienfei/system01.dbf (0)
 Block: 122              Offsets:   28 to   43           Dba:0x00000000
------------------------------------------------------------------------
 c64a1200 00000000 01000200 00000000
 <32 bytes per line>
BBED> sum apply
Check value for File 0, Block 122:
current = 0x3a4e, required = 0x3a4e
SQL> startup
ORACLE instance started.
Total System Global Area  167772160 bytes
Fixed Size                  1260720 bytes
Variable Size             150995792 bytes
Database Buffers            8388608 bytes
Redo Buffers                7127040 bytes
Database mounted.
Database opened.

The cursor: pin S wait event

A session waits for “cursor: pin S” when it wants a specific mutex in S (share) mode on a specific cursor and there is no concurrent X holder but it could not acquire that mutex immediately. This may seem a little strange as one might question why there should be any form of wait to get a mutex which has no session holding it in an incompatible mode. The reason for the wait is that in order to acquire the mutex in S mode (or release it) the session has to increment (or decrement) the mutex reference count and this requires an exclusive atomic update to the mutex structure itself. If there are concurrent sessions trying to make such an update to the mutex then only one session can actually increment (or decrement) the reference count at a time. A wait on “cursor: pin S” thus occurs if a session cannot make that atomic change immediately due to other concurrent requests.
Mutexes are local to the current instance in RAC environments.
The mutex mechanism introduced in Oracle 10g replaces library cache pins to some extent; the structure is simpler and the atomic get/set operations are faster.
Roughly speaking, every child cursor has such a simple mutex structure underneath it. When a session needs to pin the cursor in order to execute the SQL, it only has to increment the mutex in shared mode (+1), which marks the session as holding the mutex in shared mode. Many sessions can hold the mutex in shared mode at the same time, but at any given moment only one session can be performing the +1 or -1 on the mutex: the increment/decrement is an exclusive atomic operation. If concurrency is so high that a session has to wait for other sessions to finish their +1/-1, that session waits on cursor: pin S.
When many sessions are seen waiting on cursor: pin S, either the CPU is not fast enough, or one SQL statement is executed concurrently so often that the mutex on its child cursor becomes contended. If it is a capacity problem, upgrade the hardware. If it is caused by too much concurrency on one SQL, either reduce how often that SQL is executed, or clone it into N distinct SQL texts:
select /*SQL 1*/object_name from t where object_id=?
select /*SQL 2*/object_name from t where object_id=?
select /*SQL …*/object_name from t where object_id=?
select /*SQL N*/object_name from t where object_id=?
That gives N SQL cursors and N mutex structures, spreading the contention out, much like partitioning does.
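A hedged sketch of how to find the cursor behind the contention (for cursor: pin S the wait's P1 is the hash value of the cursor being pinned):
-- which cursors sessions are waiting on right now
SELECT p1 AS cursor_hash_value, COUNT(*) AS waiters
  FROM v$session
 WHERE event = 'cursor: pin S'
 GROUP BY p1;
-- map the hash value back to the SQL text and its execution count
SELECT sql_id, child_number, executions, sql_text
  FROM v$sql
 WHERE hash_value = &cursor_hash_value;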

A list of commonly used Oracle diagnostic events

Event / Description / Example
Event 10013 – Monitor Transaction Recovery: trace transaction recovery during startup. ALTER SESSION SET EVENTS '10013 trace name context forever, level 1';
Event 10015 – Dump Undo Segment Headers: dump undo segment header information after transaction recovery. ALTER SESSION SET EVENTS '10015 trace name context forever, level 1';
Event 10032 – Dump Sort Statistics: dump sort statistics. ALTER SESSION SET EVENTS '10032 trace name context forever, level 10';
Event 10033 – Dump Sort Intermediate Run Statistics: how the in-memory sort area and the temporary tablespace interact during a sort. ALTER SESSION SET EVENTS '10033 trace name context forever, level 10';
Event 10045 – Trace Free List Management Operations: freelist management operations. ALTER SESSION SET EVENTS '10045 trace name context forever, level 1';
Event 10046 – Enable SQL Statement Trace: trace SQL with execution plans, bind variables and wait statistics; level 12 is the most detailed. ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
Levels are defined as follows:
1: SQL statements, execution plans and execution statistics
4: level 1 plus bind variable information
8: level 1 plus wait event information
12: 1 + 4 + 8
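A typical usage pattern, tracing only the current session (the trace file is written to user_dump_dest and can be formatted with tkprof):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the SQL to be diagnosed here
ALTER SESSION SET EVENTS '10046 trace name context off';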
Event 10053 – Dump Optimizer Decisions: while a SQL statement is parsed, dump the decisions made by the optimizer; level 1 is the most detailed. ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
Levels are defined as follows:
1: state and computation information
2: computation information only
Event 10060 – Dump Predicates: dump the predicates of a SQL statement. The following table must first be created in the schema of the user being traced:
CREATE TABLE kkoipt_table
(c1 INTEGER,
c2 VARCHAR2(80));
The predicate information is written into that table.
ALTER SESSION SET EVENTS '10060 trace name context forever, level 1';
Event 10065 – Restrict Library Cache Dump Output for State Object Dumps: limits how much library cache detail appears in state object dumps.
1 Address of library object only
2 As level 1 plus library object lock details
3 As level 2 plus library object handle and library object
The default is level 3.
ALTER SESSION SET EVENTS '10065 trace name context forever, level <level>';
Event 10079 – Dump SQL*Net Statistics: dump SQL*Net statistics. ALTER SESSION SET EVENTS '10079 trace name context forever, level 2';
Event 10081 – Trace High Water Mark Changes: trace changes to the high water mark. ALTER SESSION SET EVENTS '10081 trace name context forever, level 1';
Event 10104 – Dump Hash Join Statistics: hash join statistics. ALTER SESSION SET EVENTS '10104 trace name context forever, level 10';
Event 10128 – Dump Partition Pruning Information: partition pruning information. ALTER SESSION SET EVENTS '10128 trace name context forever, level <level>';
Level values:
0x0001 Dump pruning descriptor for each partitioned object
0x0002 Dump partition iterators
0x0004 Dump optimizer decisions about partition-wise joins
0x0008 Dump ROWID range scan pruning information
In 9.0.1 and later, level 2 and above also requires creating the following table:
CREATE TABLE kkpap_pruning
(
partition_count    NUMBER,
iterator           VARCHAR2(32),
partition_level    VARCHAR2(32),
order_pt         VARCHAR2(12),
call_time        VARCHAR2(12),
part#             NUMBER,
subp#              NUMBER,
abs#               NUMBER
);
Event 10200 – Dump Consistent Reads: dump consistent read information. ALTER SESSION SET EVENTS '10200 trace name context forever, level 1';
Event 10201 – Dump Consistent Read Undo Application: dump the undo applied during consistent reads. ALTER SESSION SET EVENTS '10201 trace name context forever, level 1';
Event 10220 – Dump Changes to Undo Header: dump changes to undo headers. ALTER SESSION SET EVENTS '10220 trace name context forever, level 1';
Event 10221 – Dump Undo Changes: dump undo changes. ALTER SESSION SET EVENTS '10221 trace name context forever, level 7';
Event 10224 – Dump Index Block Splits / Deletes: index block split and delete information. ALTER SESSION SET EVENTS '10224 trace name context forever, level 1';
Event 10225 – Dump Changes to Dictionary Managed Extents: dump extent changes in dictionary-managed tablespaces. ALTER SESSION SET EVENTS '10225 trace name context forever, level 1';
Event 10231: skip corrupt blocks during full table scans; very useful when salvaging data from a table that contains corruption. ALTER SYSTEM SET EVENTS '10231 trace name context forever, level 10';
Event 10241 – Dump Remote SQL Execution: execution information for remote SQL statements. ALTER SESSION SET EVENTS '10241 trace name context forever, level 1';
Event 10246 – Trace PMON Process: trace the PMON process. Can only be set as an init parameter, not with ALTER SYSTEM:
event = "10246 trace name context forever, level 1"
Event 10248 – Trace Dispatcher Processes: trace dispatcher activity. event = "10248 trace name context forever, level 10"
Event 10249 – Trace Shared Server (MTS) Processes: trace shared server activity. event = "10249 trace name context forever, level 10"
Event 10270 – Debug Shared Cursors: trace shared cursor handling. event = "10270 trace name context forever, level 10"
Event 10299 – Debug Prefetching: trace prefetching of table and index blocks. event = "10299 trace name context forever, level 1"
Event 10357 – Debug Direct Path. ALTER SESSION SET EVENTS '10357 trace name context forever, level 1';
Event 10390 – Dump Parallel Execution Slave Statistics: trace the state of parallel execution slaves. ALTER SESSION SET EVENTS '10390 trace name context forever, level 1';
Event 10391 – Dump Parallel Execution Granule Allocation: trace granule allocation for parallel operations. ALTER SESSION SET EVENTS '10391 trace name context forever, level 2';
Event 10393 – Dump Parallel Execution Statistics: trace parallel execution statistics (reported per slave). ALTER SESSION SET EVENTS '10393 trace name context forever, level 1';
Event 10500 – Trace SMON Process: trace the SMON process. event = "10500 trace name context forever, level 1"
Event 10608 – Trace Bitmap Index Creation: trace bitmap index creation in detail. ALTER SESSION SET EVENTS '10608 trace name context forever, level 10';
Event 10704 – Trace Enqueues: trace enqueue (lock) usage. ALTER SESSION SET EVENTS '10704 trace name context forever, level 1';
Event 10706 – Trace Global Enqueue Manipulation: trace global enqueue usage. ALTER SESSION SET EVENTS '10706 trace name context forever, level 1';
Event 10708 – Trace RAC Buffer Cache: trace the buffer cache in a RAC environment. ALTER SESSION SET EVENTS '10708 trace name context forever, level 10';
Event 10710 – Trace Bitmap Index Access: trace bitmap index access. ALTER SESSION SET EVENTS '10710 trace name context forever, level 1';
Event 10711 – Trace Bitmap Index Merge Operation: trace bitmap index merge operations. ALTER SESSION SET EVENTS '10711 trace name context forever, level 1';
Event 10712 – Trace Bitmap Index OR Operation: trace bitmap index OR operations. ALTER SESSION SET EVENTS '10712 trace name context forever, level 1';
Event 10713 – Trace Bitmap Index AND Operation: trace bitmap index AND operations. ALTER SESSION SET EVENTS '10713 trace name context forever, level 1';
Event 10714 – Trace Bitmap Index MINUS Operation: trace bitmap index MINUS operations. ALTER SESSION SET EVENTS '10714 trace name context forever, level 1';
Event 10715 – Trace Bitmap Index Conversion to ROWIDs Operation: trace bitmap-to-ROWID conversion. ALTER SESSION SET EVENTS '10715 trace name context forever, level 1';
Event 10716 – Trace Bitmap Index Compress/Decompress: trace bitmap index compression and decompression. ALTER SESSION SET EVENTS '10716 trace name context forever, level 1';
Event 10717 – Trace Bitmap Index Compaction. ALTER SESSION SET EVENTS '10717 trace name context forever, level 1';
Event 10719 – Trace Bitmap Index DML: trace DML against bitmap-indexed columns (DML that modifies a bitmap index). ALTER SESSION SET EVENTS '10719 trace name context forever, level 1';
Event 10730 – Trace Fine Grained Access Predicates: trace fine-grained auditing predicates. ALTER SESSION SET EVENTS '10730 trace name context forever, level 1';
Event 10731 – Trace CURSOR Statements: trace CURSOR statement handling. ALTER SESSION SET EVENTS '10731 trace name context forever, level <level>';
Levels are defined as follows:
1  Print parent query and subquery
2  Print subquery only
Event 10928 – Trace PL/SQL Execution: trace PL/SQL execution. ALTER SESSION SET EVENTS '10928 trace name context forever, level 1';
Event 10938 – Dump PL/SQL Execution Statistics: dump PL/SQL execution statistics. Run rdbms/admin/tracetab.sql before using it. ALTER SESSION SET EVENTS '10938 trace name context forever, level 1';
flush_cache: flush the buffer cache. ALTER SESSION SET EVENTS 'immediate trace name flush_cache';
DROP_SEGMENTS: drop temporary segments manually when they cannot be cleaned up automatically. alter session set events 'immediate trace name DROP_SEGMENTS level ts#+1';
Here ts# is the ts# of the tablespace whose temporary segments should be dropped.
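For example (assuming the stranded segments live in tablespace USERS and its ts# turns out to be 4):
-- look up the ts# of the tablespace, then use level ts# + 1
SELECT ts# FROM v$tablespace WHERE name = 'USERS';
ALTER SESSION SET EVENTS 'immediate trace name DROP_SEGMENTS level 5';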

ORA-600 [LibraryCacheNotEmptyOnClose] on shutdown

1. Symptoms
Recorded in alert.log:
Mon May 9 19:56:10 2011 (during database shutdown)
Errors in file /opt/oracle/admin/xunzhi/udump/xunzhi_ora_328.trc:
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
Recorded in the trace file:

*** 2011-05-09 19:56:10.843
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], [], [], [], []
Current SQL information unavailable - no session.
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksedst()+31          call     ksedst1()            000000000 ? 000000001 ?
                                                   7FFFB5A19840 ? 7FFFB5A198A0 ?
                                                   7FFFB5A197E0 ? 000000000 ?
ksedmp()+610         call     ksedst()             000000000 ? 000000001 ?
                                                   7FFFB5A19840 ? 7FFFB5A198A0 ?
                                                   7FFFB5A197E0 ? 000000000 ?
ksfdmp()+21          call     ksedmp()             000000003 ? 000000001 ?
                                                   7FFFB5A19840 ? 7FFFB5A198A0 ?
                                                   7FFFB5A197E0 ? 000000000 ?
kgerinv()+161        call     ksfdmp()             000000003 ? 000000001 ?
                                                   7FFFB5A19840 ? 7FFFB5A198A0 ?
                                                   7FFFB5A197E0 ? 000000000 ?
kgeasnmierr()+163    call     kgerinv()            0068966E0 ? 01EFD6610 ?
                                                   7FFFB5A198A0 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000000 ?
kglshu()+757         call     kgeasnmierr()        0068966E0 ? 01EFD6610 ?
                                                   7FFFB5A198A0 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000001 ?
kqlnfy()+468         call     kglshu()             0068966E0 ? 000000000 ?
                                                   7FFFB5A198A0 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000001 ?
kscnfy()+587         call     kqlnfy()             000000018 ? 000000000 ?
                                                   7FFFB5A198A0 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000001 ?
ksmshu()+269         call     kscnfy()             000000018 ? 000000000 ?
                                                   000000000 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000001 ?
opistp_real()+1052   call     ksmshu()             000000018 ? 000000000 ?
                                                   000000000 ? 7FFFB5A197E0 ?
                                                   000000000 ? 000000001 ?
opistp()+309         call     opistp_real()        000000031 ? 000000002 ?
                                                   7FFFB5A1E560 ? 000000000 ?
                                                   000000000 ? 000000001 ?
opiodr()+984         call     opistp()             000000031 ? 000000002 ?
                                                   7FFFB5A1E560 ? 000000000 ?
                                                   000000000 ? 000000001 ?
ttcpip()+1012        call     opiodr()             000000031 ? 000000002 ?
                                                   7FFFB5A1E560 ? 000000000 ?
                                                   0059C02A8 ? 000000001 ?
opitsk()+1322        call     ttcpip()             00689E3B0 ? 000000001 ?
                                                   7FFFB5A1E560 ? 000000000 ?
                                                   7FFFB5A1E058 ? 7FFFB5A1E6C8 ?
opiino()+1026        call     opitsk()             000000003 ? 000000000 ?
                                                   7FFFB5A1E560 ? 000000001 ?
                                                   000000000 ? 4E5000B00000000 ?
opiodr()+984         call     opiino()             00000003C ? 000000004 ?
                                                   7FFFB5A1F728 ? 000000001 ?
                                                   000000000 ? 4E5000B00000000 ?
opidrv()+547         call     opiodr()             00000003C ? 000000004 ?
                                                   7FFFB5A1F728 ? 000000000 ?
                                                   0059C0460 ? 4E5000B00000000 ?
sou2o()+114          call     opidrv()             00000003C ? 000000004 ?
                                                   7FFFB5A1F728 ? 000000000 ?
                                                   0059C0460 ? 4E5000B00000000 ?
opimai_real()+163    call     sou2o()              7FFFB5A1F700 ? 00000003C ?
                                                   000000004 ? 7FFFB5A1F728 ?
                                                   0059C0460 ? 4E5000B00000000 ?
main()+116           call     opimai_real()        000000002 ? 7FFFB5A1F790 ?
                                                   000000004 ? 7FFFB5A1F728 ?
                                                   0059C0460 ? 4E5000B00000000 ?
__libc_start_main()  call     main()               000000002 ? 7FFFB5A1F790 ?
+244                                               000000004 ? 7FFFB5A1F728 ?
                                                   0059C0460 ? 4E5000B00000000 ?
_start()+41          call     __libc_start_main()  000723088 ? 000000002 ?
                                                   7FFFB5A1F8E8 ? 000000000 ?
                                                   0059C0460 ? 000000002 ?

2. How the problem presents itself
ORA-600 [LibraryCacheNotEmptyOnClose] is reported in the alert.log on shutdown
The trace file shows the following call stack trace and will also include a System State:
kglshu kqlnfy kscnfy ksmshu opistp_real opistp opiodr ttcpip opitsk opiino opiodr opidrv sou2o opimai_real main libc_start_main
3. Cause and consequences
This is a bug in that an ORA-600 error is reported when it is found during shutdown, after database close, that there are still objects in the library cache. It does not indicate any damage or a problem in the system.
Ignore the error as it just indicates that there are some items in the library cache when closing down the instance. The error itself occurs AFTER the database close and dismount stages so only affects the instance shutdown itself. Datafiles have been closed cleanly.
4. Affected versions / fix

Warning: log write time 560ms, size 3KB

1. Symptoms
In the trace file xunzhi_lgwr_27974.trc:
*** 2011-09-14 09:30:04.947
Warning: log write time 500ms, size 0KB
*** 2011-09-14 09:33:41.058
Warning: log write time 560ms, size 3KB
*** 2011-09-14 09:48:55.597
Warning: log write time 820ms, size 0KB
*** 2011-09-14 10:00:50.477
Warning: log write time 870ms, size 0KB
2. Metalink description [ID 601316.1]
Changes
The problem surfaced after upgrading to 10.2.0.4.
Cause
The above warning message was introduced in the 10.2.0.4 patchset. It is generated only if the log write time is more than 500 ms, and it is written to the lgwr trace file.
Solution
These messages are expected in a 10.2.0.4 database whenever a log write takes more than 500 ms. The warning means that the write is not as fast as it is intended to be, so check whether the disk is slow and look for potential OS-level causes. If everything looks fine at the hardware and OS level, these messages can safely be ignored. The trace file can simply be deleted or truncated.
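A quick way to gauge how slow the redo writes really are from inside the instance (AVERAGE_WAIT is reported in centiseconds, so multiply by 10 to get milliseconds):
SELECT event, total_waits, ROUND(average_wait * 10, 2) avg_ms
  FROM v$system_event
 WHERE event IN ('log file parallel write', 'log file sync');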
3. Workaround found online
The messages may be suppressed by setting event 10468 at level 4:
alter system set events '10468 trace name context level 4';
To turn the event off again:
ALTER SYSTEM SET EVENTS '10468 trace name context off';

MySQL master-slave switchover

1. Modify the configuration files
read-only=1    (master)
#read-only=1   (slave)
2. Check the slave's status
mysql> show processlist ;
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
| 1 | root | localhost | ecp | Query | 0 | NULL | show processlist |
| 4 | system user | | NULL | Connect | 2 | Waiting for master to send event | NULL |
| 5 | system user | | NULL | Connect | 2 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
3 rows in set (0.00 sec)
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.2
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 107
Relay_Log_File: replicate.000007
Relay_Log_Pos: 253
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 107
Relay_Log_Space: 549
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
1 row in set (0.00 sec)
3. Check the master's status
mysql> show processlist;
+----+------+-------------------+------+-------------+------+------------------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+-------------------+------+-------------+------+------------------------------------------------------------------------+------------------+
| 1 | root | localhost | ecp | Query | 0 | NULL | show processlist |
| 2 | repl | 192.168.1.4:17948 | NULL | Binlog Dump | 6 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL |
+----+------+-------------------+------+-------------+------+------------------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)
mysql> show master status \G
*************************** 1. row ***************************
File: mysql-bin.000004
Position: 107
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
4. Operations on the slave
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW PROCESSLIST;
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
| 1 | root | localhost | ecp | Query | 0 | NULL | SHOW PROCESSLIST |
| 5 | system user | | NULL | Connect | 256 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL |
+----+-------------+-----------+------+---------+------+------------------------------------------------------------------------------+------------------+
2 rows in set (0.00 sec)
Make sure the state shows "has read all relay log".
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.1.2
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 107
Relay_Log_File: replicate.000007
Relay_Log_Pos: 253
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 107
Relay_Log_Space: 549
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
1 row in set (0.00 sec)
5. Check the master's status again
mysql> show master status \G
*************************** 1. row ***************************
File: mysql-bin.000004
Position: 107
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
6. Turn the slave into the master
mysql> STOP SLAVE;
Query OK, 0 rows affected (0.00 sec)
mysql> RESET MASTER;
Query OK, 0 rows affected (0.02 sec)
mysql> RESET SLAVE;
Query OK, 0 rows affected (0.03 sec)
mysql> show master status \G
*************************** 1. row ***************************
File: mysql-bin.000001
Position: 107
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
7. Turn the master into a slave
mysql> RESET MASTER;
Query OK, 0 rows affected (0.06 sec)
mysql> RESET SLAVE;
Query OK, 0 rows affected (0.03 sec)
mysql> CHANGE MASTER TO
-> MASTER_HOST='192.168.1.4',
-> MASTER_USER='repl',
-> MASTER_PASSWORD='xifenfei',
-> MASTER_LOG_FILE='mysql-bin.000001',
-> MASTER_LOG_POS=107;
Query OK, 0 rows affected (0.05 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
8. Restart both the new master and the new slave
[root@localhost mysql]# service mysql restart
Shutting down MySQL….[ OK ]
Starting MySQL…………….[ OK ]
9. Verify that both master and slave are healthy
On the master:
SHOW PROCESSLIST;
show master status \G
On the slave:
SHOW PROCESSLIST;
start slave;
show slave status \G
If anything reports an error, fix it based on the error message.
To set up replication from scratch, see the post on configuring master/slave servers with xtrabackup.

Transporting tablespaces online with RMAN (10g and later)

The RMAN operation:
RMAN> transport tablespace O_ORACLE
2> tablespace destination 'F:\rmanbackup\td'
3> auxiliary destination 'F:\rmanbackup\ad';
RMAN-05026: 警告: 假定以下表空间集适用于指定的时间点
表空间列表要求具有 UNDO 段
表空间 SYSTEM
表空间 UNDOTBS1
使用 SID=’enEv’ 创建自动实例
供自动实例使用的初始化参数:
db_name=XFF
db_unique_name=enEv_tspitr_XFF
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=F:\rmanbackup\ad
log_archive_dest_1=’location=F:\rmanbackup\ad’
#No auxiliary parameter file used
启动自动实例 XFF
Oracle 实例已启动
系统全局区域总计 292933632 字节
Fixed Size 1374164 字节
Variable Size 100665388 字节
Database Buffers 184549376 字节
Redo Buffers 6344704 字节
自动实例已创建
对恢复集表空间运行 TRANSPORT_SET_CHECK
TRANSPORT_SET_CHECK 已成功完成
内存脚本的内容:
{
# set requested point in time
set until scn 10903430793309;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone ‘alter database mount clone database’;
# archive current online log
sql ‘alter system archive log current’;
}
正在执行内存脚本
正在执行命令: SET until clause
启动 restore 于 12-9月 -11
分配的通道: ORA_AUX_DISK_1
通道 ORA_AUX_DISK_1: SID=59 设备类型=DISK
通道 ORA_AUX_DISK_1: 正在开始还原数据文件备份集
通道 ORA_AUX_DISK_1: 正在还原控制文件
通道 ORA_AUX_DISK_1: 正在读取备份片段 F:\RMANBACKUP\9_12_0HMMD2S8_1_1
通道 ORA_AUX_DISK_1: 段句柄 = F:\RMANBACKUP\9_12_0HMMD2S8_1_1 标记 = TAG20110912
T215425
通道 ORA_AUX_DISK_1: 已还原备份片段 1
通道 ORA_AUX_DISK_1: 还原完成, 用时: 00:00:01
输出文件名=F:\RMANBACKUP\AD\XFF\CONTROLFILE\O1_MF_76W4C7XM_.CTL
完成 restore 于 12-9月 -11
sql 语句: alter database mount clone database
sql 语句: alter system archive log current
内存脚本的内容:
{
# set requested point in time
set until scn 10903430793309;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 6 to
“F:\rmanbackup\td\O_ORACLE.DBF”;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 3, 2, 6;
switch clone datafile all;
}
正在执行内存脚本
正在执行命令: SET until clause
正在执行命令: SET NEWNAME
正在执行命令: SET NEWNAME
正在执行命令: SET NEWNAME
正在执行命令: SET NEWNAME
正在执行命令: SET NEWNAME
临时文件 1 在控制文件中已重命名为 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF_TEMP_%U_.T
MP
启动 restore 于 12-9月 -11
使用通道 ORA_AUX_DISK_1
通道 ORA_AUX_DISK_1: 正在开始还原数据文件备份集
通道 ORA_AUX_DISK_1: 正在指定从备份集还原的数据文件
通道 ORA_AUX_DISK_1: 将数据文件 00001 还原到 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF
_SYSTEM_%U_.DBF
通道 ORA_AUX_DISK_1: 将数据文件 00003 还原到 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF
_UNDOTBS1_%U_.DBF
通道 ORA_AUX_DISK_1: 将数据文件 00002 还原到 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF
_SYSAUX_%U_.DBF
通道 ORA_AUX_DISK_1: 将数据文件 00006 还原到 F:\rmanbackup\td\O_ORACLE.DBF
通道 ORA_AUX_DISK_1: 正在读取备份片段 F:\RMANBACKUP\9_12_0GMMD2KI_1_1
通道 ORA_AUX_DISK_1: 段句柄 = F:\RMANBACKUP\9_12_0GMMD2KI_1_1 标记 = TAG20110912
T215425
通道 ORA_AUX_DISK_1: 已还原备份片段 1
通道 ORA_AUX_DISK_1: 还原完成, 用时: 00:03:55
完成 restore 于 12-9月 -11
数据文件 1 已转换成数据文件副本
输入数据文件副本 RECID=19 STAMP=761695711 文件名=F:\RMANBACKUP\AD\XFF\DATAFILE\O
1_MF_SYSTEM_76W4CMJO_.DBF
数据文件 3 已转换成数据文件副本
输入数据文件副本 RECID=20 STAMP=761695711 文件名=F:\RMANBACKUP\AD\XFF\DATAFILE\O
1_MF_UNDOTBS1_76W4CSVY_.DBF
数据文件 2 已转换成数据文件副本
输入数据文件副本 RECID=21 STAMP=761695711 文件名=F:\RMANBACKUP\AD\XFF\DATAFILE\O
1_MF_SYSAUX_76W4CMM9_.DBF
数据文件 6 已转换成数据文件副本
输入数据文件副本 RECID=22 STAMP=761695711 文件名=F:\RMANBACKUP\TD\O_ORACLE.DBF
内存脚本的内容:
{
# set requested point in time
set until scn 10903430793309;
# online the datafiles restored or switched
sql clone “alter database datafile 1 online”;
sql clone “alter database datafile 3 online”;
sql clone “alter database datafile 2 online”;
sql clone “alter database datafile 6 online”;
# recover and open resetlogs
recover clone database tablespace “O_ORACLE”, “SYSTEM”, “UNDOTBS1”, “SYSAUX” de
lete archivelog;
alter clone database open resetlogs;
}
正在执行内存脚本
正在执行命令: SET until clause
sql 语句: alter database datafile 1 online
sql 语句: alter database datafile 3 online
sql 语句: alter database datafile 2 online
sql 语句: alter database datafile 6 online
启动 recover 于 12-9月 -11
使用通道 ORA_AUX_DISK_1
正在开始介质的恢复
线程 1 序列 177 的归档日志已作为文件 E:\ORACLE\ARCHIVELOG\ARC0000000177_07534894
09.0001 存在于磁盘上
线程 1 序列 178 的归档日志已作为文件 E:\ORACLE\ARCHIVELOG\ARC0000000178_07534894
09.0001 存在于磁盘上
归档日志文件名=E:\ORACLE\ARCHIVELOG\ARC0000000177_0753489409.0001 线程=1 序列=17
7
归档日志文件名=E:\ORACLE\ARCHIVELOG\ARC0000000178_0753489409.0001 线程=1 序列=17
8
介质恢复完成, 用时: 00:00:16
完成 recover 于 12-9月 -11
数据库已打开
内存脚本的内容:
{
# make read only the tablespace that will be exported
sql clone ‘alter tablespace O_ORACLE read only’;
# create directory for datapump export
sql clone “create or replace directory STREAMS_DIROBJ_DPDIR as ”
F:\rmanbackup\td””;
}
正在执行内存脚本
sql 语句: alter tablespace O_ORACLE read only
sql 语句: create or replace directory STREAMS_DIROBJ_DPDIR as ”F:\rmanbackup\td

正在执行元数据导出…
EXPDP> 启动 “SYS”.”TSPITR_EXP_enEv”:
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/TABLE
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/INDEX
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/COMMENT
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/TRIGGER
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> 处理对象类型 TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> 已成功加载/卸载了主表 “SYS”.”TSPITR_EXP_enEv”
EXPDP> **********************************************************************
********
EXPDP> SYS.TSPITR_EXP_enEv 的转储文件集为:
EXPDP> F:\RMANBACKUP\TD\DMPFILE.DMP
EXPDP> **********************************************************************
********
EXPDP> 可传输表空间 O_ORACLE 所需的数据文件:
EXPDP> F:\RMANBACKUP\TD\O_ORACLE.DBF
EXPDP> 作业 “SYS”.”TSPITR_EXP_enEv” 已于 22:12:39 成功完成
导出完毕
/*
   The following command may be used to import the tablespaces.
   Substitute values for <logon> and <directory>.
   impdp <logon> directory=<directory> dumpfile= 'dmpfile.dmp' transport_datafiles= F:\rmanbackup\td\O_ORACLE.DBF
*/
--------------------------------------------------------------
-- Start of sample PL/SQL script for importing the tablespaces
--------------------------------------------------------------
-- creating directory objects
CREATE DIRECTORY STREAMS$DIROBJ$1 AS 'F:\rmanbackup\td\';
CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS 'F:\rmanbackup\td';
/* PL/SQL Script to import the exported tablespaces */
DECLARE
  -- the datafiles
  tbs_files     dbms_streams_tablespace_adm.file_set;
  cvt_files     dbms_streams_tablespace_adm.file_set;
  -- the dumpfile to import
  dump_file     dbms_streams_tablespace_adm.file;
  dp_job_name   VARCHAR2(30) := NULL;
  -- names of tablespaces that were imported
  ts_names      dbms_streams_tablespace_adm.tablespace_set;
BEGIN
  -- dump file name and location
  dump_file.file_name := 'dmpfile.dmp';
  dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
  -- forming list of datafiles for import
  tbs_files(1).file_name := 'O_ORACLE.DBF';
  tbs_files(1).directory_object := 'STREAMS$DIROBJ$1';
  -- import tablespaces
  dbms_streams_tablespace_adm.attach_tablespaces(
    datapump_job_name => dp_job_name,
    dump_file         => dump_file,
    tablespace_files  => tbs_files,
    converted_files   => cvt_files,
    tablespace_names  => ts_names);
  -- output names of imported tablespaces
  IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
    FOR i IN ts_names.first .. ts_names.last LOOP
      dbms_output.put_line('imported tablespace ' || ts_names(i));
    END LOOP;
  END IF;
END;
/
-- dropping directory objects
DROP DIRECTORY STREAMS$DIROBJ$1;
DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
--------------------------------------------------------------
-- End of sample PL/SQL script
--------------------------------------------------------------
删除自动实例
关闭自动实例
数据库已关闭
数据库已卸装
Oracle 实例已关闭
自动实例已删除
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF_TEMP_76W4N51K_.TMP
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\ONLINELOG\O1_MF_3_76W4MVQS_.LOG
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\ONLINELOG\O1_MF_2_76W4MV1H_.LOG
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\ONLINELOG\O1_MF_1_76W4MT2Q_.LOG
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF_SYSAUX_76W4CMM9_.DBF
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF_UNDOTBS1_76W4CSVY_.DBF
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\DATAFILE\O1_MF_SYSTEM_76W4CMJO_.DBF
已删除辅助实例文件 F:\RMANBACKUP\AD\XFF\CONTROLFILE\O1_MF_76W4C7XM_.CTL
The files produced at the end, and how to handle them

Copy the files above to a suitable location on the target side; then either edit and run the generated SQL script to plug the tablespace in, or simply use impdp.
Notes
1. Before using RMAN, check whether the platform combination is supported; if not, convert the files first and then register them with CATALOG START WITH (10g); on 9i other workarounds are needed.
2. While RMAN builds the transportable set, an SCN or a point in time can be specified, i.e. incomplete recovery:
UNTIL SCN 11379; or UNTIL TIME 'SYSDATE-1';
3. The RMAN backup used must not be from before the database was last opened with RESETLOGS.
4. The whole flow relies on the automatic auxiliary instance in 10g; on 9i the auxiliary instance has to be created manually.

object_id and data_object_id: differences and relationship

Both object_id and data_object_id are unique identifiers for a database object, but object_id is the logical id while data_object_id is the physical id. Objects without physical storage have no data_object_id: procedures, functions, packages, data types, database links, materialized view definitions, view definitions, temporary tables, partitioned table definitions and the like do not map to a segment, so their data_object_id is null.

When a table is first created, its object_id and data_object_id are equal, but once the table goes through operations such as move or truncate that give it a new segment, the data_object_id changes.

DATA_OBJECT_ID was introduced in 8.0 to track versions of the same segment (certain operations change the version). It is used to discover stale ROWIDs and stale undo records.
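The rowid itself carries the data_object_id, which is one way to see the link to ROWIDs mentioned above (using the xff table from the example below):
SQL> SELECT DBMS_ROWID.ROWID_OBJECT(ROWID) AS data_object_id FROM xff WHERE ROWNUM = 1;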

SQL> create table xff as select * from dba_objects where rownum<100;
Table created
SQL> select object_id,data_object_id from user_objects where object_name='XFF';
 OBJECT_ID DATA_OBJECT_ID
---------- --------------
    211325         211325
SQL> alter table xff move;
Table altered
SQL> select object_id,data_object_id from user_objects where object_name='XFF';
 OBJECT_ID DATA_OBJECT_ID
---------- --------------
    211325         211326
SQL> truncate table xff;
Table truncated
SQL> create index ind_xff on xff(object_id);
Index created
SQL>  select object_id,data_object_id from user_objects where object_name='IND_XFF';
 OBJECT_ID DATA_OBJECT_ID
---------- --------------
    211328         211328
SQL> ALTER INDEX IND_XFF REBUILD;
Index altered
SQL>  select object_id,data_object_id from user_objects where object_name='IND_XFF';
 OBJECT_ID DATA_OBJECT_ID
---------- --------------
    211328         211329

Oracle transportable tablespaces

0. Check platform information
All platforms supported by TTS:
SELECT * FROM V$TRANSPORTABLE_PLATFORM;
The current system's platform:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
1. Source-side operations
Check whether the tablespace meets the TTS requirements:
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ODU', TRUE);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
SQL> SELECT COUNT(*) FROM DBA_TABLES WHERE TABLESPACE_NAME='ODU';
COUNT(*)
----------
59
SQL> SELECT file_name from dba_data_files where tablespace_name='ODU';
FILE_NAME
--------------------------------------------------
/opt/oracle/oradata/chf/odu01.dbf
/opt/oracle/oradata/chf/odu02.dbf
Put the tablespace to be transported into read only mode:
SQL> ALTER TABLESPACE ODU READ ONLY;
Tablespace altered.
Export the tablespace metadata:
[oracle@node1 ~]$ exp userid=\'/ as sysdba\' tablespaces=ODU file=/tmp/ODU.dmp transport_tablespace=y
Export: Release 10.2.0.4.0 – Production on Sun Sep 11 10:01:52 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata…
For tablespace ODU …
. exporting cluster definitions
. exporting table definitions
. . exporting table T_ODU_03
. . exporting table T_ODU_01
. . exporting table T_ODU
. . exporting table DB
. . exporting table NODE
. . exporting table CONF
. . exporting table DBINC
. . exporting table CKP
. . exporting table TS
. . exporting table TSATT
. . exporting table DF
. . exporting table DFATT
. . exporting table TF
. . exporting table TFATT
. . exporting table OFFR
. . exporting table RR
. . exporting table RT
. . exporting table ORL
. . exporting table RLH
. . exporting table AL
. . exporting table BS
. . exporting table BP
. . exporting table BCF
. . exporting table CCF
. . exporting table XCF
. . exporting table BSF
. . exporting table BDF
. . exporting table CDF
. . exporting table XDF
. . exporting table BRL
. . exporting table BCB
. . exporting table CCB
. . exporting table SCR
. . exporting table SCRL
. . exporting table CONFIG
. . exporting table XAL
. . exporting table RSR
. . exporting table FB
. . exporting table GRSP
. . exporting table ROUT
. . exporting table RCVER
. . exporting table F_DROP
. . exporting table T_QUERY
. . exporting table T_UNDO
. . exporting table A
. . exporting table T1
. . exporting table T2_1
. . exporting table T2
. . exporting table T_MV
. . exporting table TAB2
. . exporting table MLOG$_T_MV
. . exporting table T_N
. . exporting table T_M
. . exporting table MLOG$_T_N
. . exporting table T_1
. . exporting table T_2
. . exporting table T_3
. . exporting table T_4
. . exporting table T_5
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.
SQL> alter tablespace odu read write;
Tablespace altered.
Transfer the files to the target side:
[oracle@node1 ~]$ scp /opt/oracle/oradata/chf/odu0* 192.168.11.12:/opt/oracle/oradata/test
The authenticity of host ‘192.168.11.12 (192.168.11.12)’ can’t be established.
RSA key fingerprint is db:3c:b4:34:7f:d7:e4:97:ab:b6:8b:b0:ab:22:43:35.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘192.168.11.12’ (RSA) to the list of known hosts.
oracle@192.168.11.12’s password:
odu01.dbf 100% 100MB 3.3MB/s 00:30
odu02.dbf 100% 11GB 2.8MB/s 1:05:00
[oracle@node1 ~]$ scp /tmp/ODU.dmp 192.168.11.12:/tmp
oracle@192.168.11.12’s password:
Permission denied, please try again.
oracle@192.168.11.12’s password:
ODU.dmp 100% 456KB 456.0KB/s 00:00
2. Target-side operations
Import the metadata:
[oracle@ECP-UC-DB1 ~]$ imp userid=\'/ as sysdba\' tablespaces=ODU file=/tmp/ODU.dmp transport_tablespace=y datafiles=/opt/oracle/oradata/test/odu01.dbf, /opt/oracle/oradata/test/odu02.dbf fromuser=chf touser=chf
Import: Release 10.2.0.4.0 – Production on Sun Sep 11 11:13:25 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V10.02.01 via conventional path
About to import transportable tablespace(s) metadata…
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
. importing CHF’s objects into CHF
. . importing table “T_ODU_03”
. . importing table “T_ODU_01”
. . importing table “T_ODU”
. . importing table “DB”
. . importing table “NODE”
. . importing table “CONF”
. . importing table “DBINC”
. . importing table “CKP”
. . importing table “TS”
. . importing table “TSATT”
. . importing table “DF”
. . importing table “DFATT”
. . importing table “TF”
. . importing table “TFATT”
. . importing table “OFFR”
. . importing table “RR”
. . importing table “RT”
. . importing table “ORL”
. . importing table “RLH”
. . importing table “AL”
. . importing table “BS”
. . importing table “BP”
. . importing table “BCF”
. . importing table “CCF”
. . importing table “XCF”
. . importing table “BSF”
. . importing table “BDF”
. . importing table “CDF”
. . importing table “XDF”
. . importing table “BRL”
. . importing table “BCB”
. . importing table “CCB”
. . importing table “SCR”
. . importing table “SCRL”
. . importing table “CONFIG”
. . importing table “XAL”
. . importing table “RSR”
. . importing table “FB”
. . importing table “GRSP”
. . importing table “ROUT”
. . importing table “RCVER”
. . importing table “F_DROP”
. . importing table “T_QUERY”
. . importing table “T_UNDO”
. . importing table “A”
. . importing table “T1”
. . importing table “T2_1”
. . importing table “T2”
. . importing table “T_MV”
. . importing table “TAB2”
. . importing table “MLOG$_T_MV”
. . importing table “T_N”
. . importing table “T_M”
. . importing table “MLOG$_T_N”
. . importing table “T_1”
. . importing table “T_2”
. . importing table “T_3”
. . importing table “T_4”
. . importing table “T_5”
About to enable constraints…
Import terminated successfully without warnings.
SQL> select tablespace_name ,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
TEMP ONLINE
USERS ONLINE
XFF ONLINE
ODU READ ONLY
7 rows selected.
Switch it back to read write mode (if required):
SQL> alter tablespace odu read write;
Tablespace altered.
SQL> SELECT COUNT(*) FROM DBA_TABLES WHERE TABLESPACE_NAME='ODU';
COUNT(*)
----------
59
3. Notes
1. If the platforms have different endianness, the datafiles must be converted with RMAN CONVERT.
2. The metadata export/import can also be done with Data Pump (see the sketch below).
3. Check objects such as views, triggers, packages, procedures and functions; if they are missing on the target, bring them over with exp/imp rows=n or create them manually.
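A minimal sketch of the Data Pump variant mentioned in note 2 (the directory object DPUMP_DIR and the dump file name are assumptions, not taken from the example above; the tablespace must still be read only during the export):
expdp \'/ as sysdba\' directory=DPUMP_DIR dumpfile=odu_tts.dmp transport_tablespaces=ODU
impdp \'/ as sysdba\' directory=DPUMP_DIR dumpfile=odu_tts.dmp transport_datafiles='/opt/oracle/oradata/test/odu01.dbf','/opt/oracle/oradata/test/odu02.dbf'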