Recovering an Accidentally Dropped DUAL Table

1. Dropping the DUAL table in 10g

SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> select object_type,owner from dba_objects where object_name='DUAL';
OBJECT_TYPE         OWNER
------------------- ------------------------------
TABLE               SYS
SYNONYM             PUBLIC
SQL> drop table sys.dual;
Table dropped.
SQL> select object_type from dba_objects where object_name='DUAL';
OBJECT_TYPE
-------------------
SYNONYM
SQL> SELECT SYSDATE FROM dual;
SELECT SYSDATE FROM dual
                    *
ERROR at line 1:
ORA-01775: looping chain of synonyms
SQL> CREATE TABLE XFF AS SELECT * from dba_objects;
Table created.
SQL> drop table xff purge;
drop table xff purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01775: looping chain of synonyms
Tracing the session with event 10046 shows that every DROP TABLE recursively executes the following statement, and this is where the error is raised:
select dummy from dual where  ora_dict_obj_type = 'TABLE'

The errors are actually quite straightforward: the DUAL table is gone, but the PUBLIC synonym for it still exists. A reference to dual resolves to the synonym, the synonym points at a table that no longer exists, and name resolution loops back on itself, hence ORA-01775: looping chain of synonyms.
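To see why a loop is reported, you can check what the remaining PUBLIC synonym still points at (a quick sketch using the standard dictionary view):

select owner, synonym_name, table_owner, table_name
  from dba_synonyms
 where synonym_name = 'DUAL';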

2. Solution

SQL> CREATE TABLE "SYS"."DUAL"
  2     (       "DUMMY" VARCHAR2(1)
  3     ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  4    STORAGE(INITIAL 16384 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  5    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  6    TABLESPACE "SYSTEM" ;
Table created.
SQL> GRANT SELECT ON "SYS"."DUAL" TO PUBLIC WITH GRANT OPTION;
Grant succeeded.
SQL> insert into dual values('X');
1 row created.
SQL> COMMIT;
Commit complete.
--recompile invalid objects
SQL> @?/rdbms/admin/utlrp.sql

3. Test results

SQL> select sysdate from dual;
SYSDATE
------------
13-MAR-12
SQL> drop table xff purge;
Table dropped.

ORA-00245/RMAN-08132 During an RMAN Backup

The RMAN backup fails with the following errors:

RMAN> backup full  tag 'dbfull' format '/jfkdata1/rman_bak/full_%d_%u' database
 2>   include current controlfile  plus archivelog  format '/jfkdata1/rman_bak/arch_%d_%u';
Starting backup at 13-MAR-12
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=297 instance=ykcdb2 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=29 RECID=43 STAMP=777808820
input archived log thread=1 sequence=30 RECID=46 STAMP=777816035
input archived log thread=2 sequence=21 RECID=45 STAMP=777816033
channel ORA_DISK_1: starting piece 1 at 13-MAR-12
channel ORA_DISK_1: finished piece 1 at 13-MAR-12
piece handle=/jfkdata1/rman_bak/arch_YKCDB_01n5p1vf tag=DBFULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 13-MAR-12
Starting backup at 13-MAR-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/jfkdata1/oradata/ykcdb/sysaux01.dbf
input datafile file number=00001 name=/jfkdata1/oradata/ykcdb/system01.dbf
input datafile file number=00003 name=/jfkdata1/oradata/ykcdb/undotbs01.dbf
input datafile file number=00005 name=/jfkdata1/oradata/ykcdb/undotbs02.dbf
input datafile file number=00004 name=/jfkdata1/oradata/ykcdb/users01.dbf
channel ORA_DISK_1: starting piece 1 at 13-MAR-12
channel ORA_DISK_1: finished piece 1 at 13-MAR-12
piece handle=/jfkdata1/rman_bak/full_YKCDB_02n5p209 tag=DBFULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 13-MAR-12
channel ORA_DISK_1: finished piece 1 at 13-MAR-12
piece handle=/jfkdata1/rman_bak/full_YKCDB_03n5p213 tag=DBFULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-MAR-12
Starting backup at 13-MAR-12
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=22 RECID=48 STAMP=777816102
input archived log thread=1 sequence=31 RECID=47 STAMP=777816101
channel ORA_DISK_1: starting piece 1 at 13-MAR-12
channel ORA_DISK_1: finished piece 1 at 13-MAR-12
piece handle=/jfkdata1/rman_bak/arch_YKCDB_04n5p217 tag=DBFULL comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-MAR-12
ORA-00245: control file backup operation failed
RMAN-08132: WARNING: cannot update recovery area reclaimable file list
RMAN>

This shows that the datafiles, archived logs, spfile and controlfile were all backed up successfully; the controlfile failure reported at the very end must therefore be the snapshot controlfile backup failing.

Cause

RMAN creates a copy of the control file for read consistency, this is the snapshot controlfile.
Due to the changes made to the controlfile backup mechanism in 11gR2, any instance
in the cluster may write to the snapshot controlfile. Therefore,
the snapshot controlfile needs to be visible to all instances.
The same happens when a backup of the controlfile is created directly from
SQL*Plus: any instance in the cluster may write to the backup controlfile.
In 11gR2 onwards, the controlfile backup happens without holding the control file enqueue.
For non-RAC database, this doesn't change anything.
But, for RAC database, the snapshot controlfile location
must be in a shared file system that will be accessible from all the nodes.
The snapshot controlfile MUST be  accessible by all nodes of a RAC database.

In short: for a RAC database, the RMAN snapshot controlfile must reside on shared storage.

Solution

The snapshot controlfile MUST be accessible by all nodes of a RAC database;
if the snapshot controlfile does not reside on a shared device, this error will be raised.
1. Check the snapshot controlfile location:
RMAN> show all;
2. Configure the snapshot controlfile to a shared disk:
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '<shared_disk>/snapcf_<DBNAME>.f';
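For example, if the backup destination /jfkdata1 is actually shared between the nodes (an assumption made here purely for illustration; on recent 11.2 patch levels an ASM diskgroup can also be used), the configuration and a check of the new setting might look like this:

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/jfkdata1/rman_bak/snapcf_ykcdb.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;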

TimesTen Client/Server Configuration

I. Linux/Unix configuration
1. IP addresses
Client: 192.168.1.105
Server: 192.168.1.10

2. Server-side ODBC configuration

$ more sys.odbc.ini
[ODBC Data Sources]
TT_1122=TimesTen 11.2.2 Driver
[TT_1122]
Driver=/u01/TimesTen/tt1122/lib/libtten.so
DataStore=/u01/TimesTen/tt1122/info/TT_1122
DatabaseCharacterSet=ZHS16GBK
PermSize=64
OracleNetServiceName=XFF

3. TimesTen server listening port

$ ttstatus|grep "TimesTen server"
TimesTen server pid 3792 started on port 53393

4. Client configuration

$ more sys.ttconnect.ini
[tt_xifenfei_c_s]
Network_Address=192.168.1.10
TCP_PORT=53393
$ more sys.odbc.ini
[ODBC Data Sources]
tt_xifenfei_c=TimesTen 11.2.2 Client Driver
[tt_xifenfei_c]
TTC_SERVER=tt_xifenfei_c_s
TTC_SERVER_DSN=tt_1122
UID=chf
PWD=xifenfei

See also: a concise guide to configuring the TimesTen client on another node
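To verify the client side, you can connect from the client machine with ttIsqlCS, the client/server variant of ttIsql; a sketch assuming the client DSN defined above:

$ ttIsqlCS "DSN=tt_xifenfei_c"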

II. Windows configuration
1. In the ODBC administrator, add a TimesTen Client data source.

2. Click Servers and add the server IP address and port.

3. Close the Servers dialog and configure the remaining options.

III. Additional notes
The configuration shown here is only the bare minimum; in a real production environment other related settings and environment variables also need to be considered.

Encrypting Shell Scripts with gzexe

Scripts often contain sensitive information such as usernames, passwords and IP addresses, and some form of encryption is needed to hide it and keep the system safe (mainly from novices). shc offers much stronger protection, but it has to be installed separately, which is a hassle; for everyday needs I recommend simply using gzexe, which ships with most Linux and Unix systems.

[root@xifenfei tmp]# more xifenfei.sh
#!/usr/bin/ksh
echo 'xifenfei--'`date`
[root@xifenfei tmp]# ./xifenfei.sh
xifenfei--Tue Mar 6 13:31:35 CST 2012
[root@xifenfei tmp]# ls -l xifenfei.sh
-rwxr-xr-x 1 root root 40 Mar  6 13:30 xifenfei.sh
--encrypt the script
[root@xifenfei tmp]# gzexe xifenfei.sh
xifenfei.sh:     -5.0%
[root@xifenfei tmp]# ls -l xifenfei.sh*
-rwxr-xr-x 1 root root 449 Mar  6 13:31 xifenfei.sh
-rwxr-xr-x 1 root root  40 Mar  6 13:30 xifenfei.sh~
--contents of the encrypted file
[root@xifenfei tmp]# strings xifenfei.sh
#!/bin/sh
skip=14
tmpdir=`/bin/mktemp -d ${TMPDIR:-/tmp}/gzexe.XXXXXXXXXX` || exit 1
prog="${tmpdir}/`echo \"$0\" | sed 's|^.*/||'`"
if /usr/bin/tail -n +$skip "$0" | "/bin"/gzip -cd > "$prog"; then
  /bin/chmod 700 "$prog"
  trap '/bin/rm -rf $tmpdir; exit $res' EXIT
  "$prog" ${1+"$@"}; res=$?
else
  echo "Cannot decompress $0"
  /bin/rm -rf $tmpdir
  exit 1
fi; exit $res
xifenfei.sh
UOHI,IM

Here xifenfei.sh is the encrypted file and xifenfei.sh~ is the original source (the pre-encryption file); simply running xifenfei.sh gives the same behaviour as the original script.
Advantage: it works on most systems and needs no extra software.
Disadvantage: it is trivial to break; it keeps the casual snooper out, not a determined one.
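The weakness is easy to demonstrate: besides strings exposing the wrapper as shown above, gzexe itself can restore the original script with its -d option, so treat this as obfuscation rather than real encryption:

--restore the original (decrypted) script
[root@xifenfei tmp]# gzexe -d xifenfei.sh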

Configuring a Multiple-Table Cache Group

1. Prepare test tables and data

SQL> CREATE TABLE customer
  2  (cust_num NUMBER(6) NOT NULL PRIMARY KEY,
  3   name     VARCHAR2(50)
  4  );
Table created.
SQL> CREATE TABLE orders
  2  (ord_num      NUMBER(10) NOT NULL PRIMARY KEY,
  3   cust_num     NUMBER(6) NOT NULL
  4  );
Table created.
SQL> insert into customer values(1,'wwww.xifenfei.com1');
1 row created.
SQL> insert into customer values(2,'wwww.xifenfei.com2');
1 row created.
SQL> insert into customer values(3,'wwww.xifenfei.com3');
1 row created.
SQL> insert into customer values(4,'wwww.xifenfei.com4');
1 row created.
SQL> insert into orders(cust_num,ord_num) values(1,1);
1 row created.
SQL> insert into orders (cust_num,ord_num) values(1,2);
1 row created.
SQL> insert into orders (cust_num,ord_num) values(3,5);
1 row created.
SQL> insert into orders (cust_num,ord_num) values(3,6);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from customer;
  CUST_NUM NAME
---------- --------------------------------------------------
         1 wwww.xifenfei.com1
         2 wwww.xifenfei.com2
         3 wwww.xifenfei.com3
         4 wwww.xifenfei.com4
SQL> select * from orders;
   ORD_NUM   CUST_NUM
---------- ----------
         1          1
         2          1
         5          3
         6          3
SQL> grant select on oratt.customer to cacheuser;
Grant succeeded.
SQL> grant select on oratt.orders to cacheuser;
Grant succeeded.

2. Create the cache group

[oracle@xifenfei ~]$ ttIsql "DSN=tt_1122;UID=cacheuser;PWD=timesten;OraclePWD=oracle"
Command>  drop cache group cacheuser.customer_orders;
Command> CREATE READONLY CACHE GROUP customer_orders
       > AUTOREFRESH INTERVAL 5 SECONDS
       > STATE ON
       > FROM oratt.customer
       >  (cust_num NUMBER(6) NOT NULL,
       >   name     VARCHAR2(50),
       >   PRIMARY KEY(cust_num)),
       > oratt.orders
       >  (ord_num      NUMBER(10) NOT NULL,
       >   cust_num     NUMBER(6) NOT NULL,
       >   PRIMARY KEY(ord_num),
       >   FOREIGN KEY(cust_num) REFERENCES oratt.customer(cust_num));
Command> cachegroups;
Cache Group CACHEUSER.CUSTOMER_ORDERS:
  Cache Group Type: Read Only
  Autorefresh: Yes
  Autorefresh Mode: Incremental
  Autorefresh State: On
  Autorefresh Interval: 5 Seconds
  Autorefresh Status: ok
  Aging: No aging defined
  Root Table: ORATT.CUSTOMER
  Table Type: Read Only
  Child Table: ORATT.ORDERS
  Table Type: Read Only
1 cache groups found.

3. Grant table access inside TimesTen

[oracle@xifenfei ~]$ ttisql tt_1122
Command> grant select on oratt.customer to cacheuser;
Command> grant select on oratt.orders to cacheuser;

4. Check the initial data load

[oracle@xifenfei ~]$ ttIsql "DSN=tt_1122;UID=cacheuser;PWD=timesten;OraclePWD=oracle"
Command> select * from oratt.customer;
< 1, wwww.xifenfei.com1 >
< 2, wwww.xifenfei.com2 >
< 3, wwww.xifenfei.com3 >
< 4, wwww.xifenfei.com4 >
4 rows found.
Command> select * from oratt.orders;
< 1, 1 >
< 2, 1 >
< 5, 3 >
< 6, 3 >
4 rows found.

5. Modify the data in Oracle

SQL> update customer set name='xifenfei' where cust_num=2;
1 row updated.
SQL> insert into customer values(5,'wwww.xifenfei.com5');
1 row created.
SQL> delete from customer where cust_num=1;
1 row deleted.
SQL> commit;
Commit complete.

6. Verify the data in TimesTen

Command> select * from oratt.customer;
< 2, xifenfei >
< 3, wwww.xifenfei.com3 >
< 4, wwww.xifenfei.com4 >
< 5, wwww.xifenfei.com5 >
4 rows found.
Command> select * from oratt.orders;
< 5, 3 >
< 6, 3 >
2 rows found.

7. Additional notes
7.1) In Oracle, cacheuser must be granted access to the relevant tables in the oratt schema, otherwise creating the cache group fails.
7.2) Automatic refresh requires AUTOREFRESH STATE ON; other ways of initially populating the cached tables are not explored here (one possible alternative is sketched right after this list).
7.3) In TimesTen, deletes cascade from the parent table to the child table of the cache group.
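One possible alternative to autorefresh-driven initial population is an explicit LOAD CACHE GROUP; a hedged sketch only, since for a READONLY cache group this is allowed when the autorefresh state is PAUSED (i.e. the group would have to be created with AUTOREFRESH STATE PAUSED first):

Command> LOAD CACHE GROUP cacheuser.customer_orders COMMIT EVERY 256 ROWS;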

Working Through an ORA-600 [13013]

After an earlier database incident had been dealt with, ORA-600 [13013] errors started showing up in the alert log:

Thu Mar 08 23:29:37 2012
Errors in file /opt/oracle/diag/rdbms/chf/chf/trace/chf_smon_24137.trc  (incident=38681):
ORA-00600: internal error code, arguments: [13013], [5001], [518], [4198427], [170], [4198427], [17], [], [], [], [], []
Incident details in: /opt/oracle/diag/rdbms/chf/chf/incident/incdir_38681/chf_smon_24137_i38681.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Non-fatal internal error happenned while SMON was doing flushing of monitored table stats.
SMON encountered 1 out of maximum 100 non-fatal internal errors.

Information from the trace file
It shows that the error was raised while updating sys.col_usage$:

Dump continued from file: /opt/oracle/diag/rdbms/chf/chf/trace/chf_smon_24137.trc
ORA-00600: internal error code, arguments: [13013], [5001], [518], [4198427], [170], [4198427], [17], [], [], [], [], []
========= Dump for incident 38681 (ORA 600 [13013]) ========
*** 2012-03-08 23:29:37.400
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=3c1kubcdjnppq) -----
update sys.col_usage$ set   equality_preds    = equality_preds    + decode(bitand(:flag,1),0,0,1),   equijoin_preds    = equijoin_preds    + decode(bitand(:flag,2),0,0
,1),   nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0,1),   range_preds       = range_preds       + decode(bitand(:flag,8),0,0,1),   like_preds
    = like_preds        + decode(bitand(:flag,16),0,0,1),   null_preds        = null_preds        + decode(bitand(:flag,32),0,0,1),  timestamp = :time where obj# = :ob
jn and intcol# = :coln

MOS description of ORA-600 [13013]

Format: ORA-600 [13013] [a] [b] {c} [d] [e] [f]
Arg [a] Passcount
Arg [b] Data Object number
Arg {c} Tablespace Decimal Relative DBA (RDBA) of block containing the row to be updated
Arg [d] Row Slot number
Arg [e] Decimal RDBA of block being updated (Typically same as {c})
Arg [f] Code

Verifying the MOS description

SQL> select dbms_utility.data_block_address_file(4198427) rfile,
  2  dbms_utility.data_block_address_block(4198427) blocks
  3  from dual;
     RFILE     BLOCKS
---------- ----------
         1       4123
SQL> SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, A.PARTITION_NAME
  2    FROM DBA_EXTENTS A
  3   WHERE FILE_ID = &FILE_ID
  4     AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
Enter value for file_id: 1
old   3:  WHERE FILE_ID = &FILE_ID
new   3:  WHERE FILE_ID = 1
Enter value for block_id: 4123
old   4:    AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1
new   4:    AND 4123 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1
OWNER SEGMENT_NAME SEGMENT_TY TABLESPACE PARTI
----- ------------ ---------- ---------- -----
SYS   COL_USAGE$   TABLE      SYSTEM
--matches the table reported in the trace file
SQL>  select object_type,object_name from dba_objects where object_id=518;
OBJECT_TYPE         OBJECT_NAME
------------------- ------------------------------
TABLE               COL_USAGE$
--also matches the table reported in the trace file

Analyzing the affected table

SQL> ANALYZE TABLE sys.COL_USAGE$ VALIDATE STRUCTURE CASCADE;
ANALYZE TABLE sys.COL_USAGE$ VALIDATE STRUCTURE CASCADE
*
ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file
SQL> select index_name,COLUMN_NAME,COLUMN_POSITION FROM DBA_IND_COLUMNS
2    WHERE TABLE_NAME='COL_USAGE$';
INDEX_NAME      COLUMN_NAM COLUMN_POSITION
--------------- ---------- ---------------
I_COL_USAGE$    OBJ#                     1
I_COL_USAGE$    INTCOL#                  2
SQL> set autot trace exp
SQL>  SELECT /*+ FULL(t1) */ OBJ#,INTCOL#
  2   FROM sys.COL_USAGE$ t1
  3   MINUS
  4   SELECT /*+ index(t I_COL_USAGE$) */ OBJ#,INTCOL#
  5   FROM sys.COL_USAGE$ t where OBJ# is not null or INTCOL# is not null;
no rows selected
--no rows returned
Execution Plan
----------------------------------------------------------
Plan hash value: 399371572
------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time
   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |  4262 | 76716 |    27  (71)| 00:00:01 |
|   1 |  MINUS              |              |       |       |            |          |
|   2 |   SORT UNIQUE       |              |  4262 | 38358 |     9  (12)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| COL_USAGE$   |  4262 | 38358 |     8   (0)| 00:00:01 |
|   4 |   SORT UNIQUE NOSORT|              |  4262 | 38358 |    18   (6)| 00:00:01 |
|*  5 |    INDEX FULL SCAN  | I_COL_USAGE$ |  4262 | 38358 |    17   (0)| 00:00:01 |
------------------------------------------------------------------------------------
--验证表两个sql是否正确(一个全表扫描,另个index 快速扫描)
SQL>  SELECT /*+ index(t I_COL_USAGE$) */ OBJ#,INTCOL#
  2   FROM sys.COL_USAGE$ t where OBJ# is not null or INTCOL# is not null
  3   MINUS
  4  SELECT /*+ FULL(t1) */ OBJ#,INTCOL#
  5   FROM sys.COL_USAGE$ t1;
      OBJ#    INTCOL#
---------- ----------
4294951004          2
4294951004          3
4294951004          4
4294951004         26
4294951004         27
4294951037          4
4294951037          5
4294951037          6
4294951037          9
4294951037         10
4294951840         11
      OBJ#    INTCOL#
---------- ----------
4294951840         12
4294951906          4
4294952709          3
4294952867          4
4294952867          9
16 rows selected.
--the index contains 16 entries that no longer exist in the table

Fix and verify

SQL> alter index sys.I_COL_USAGE$ rebuild online;
Index altered.
SQL>  SELECT /*+ FULL(t1) */ OBJ#,INTCOL#
 FROM sys.COL_USAGE$ t1
  2    3   MINUS
  4   SELECT /*+ index(t I_COL_USAGE$) */ OBJ#,INTCOL#
  5   FROM sys.COL_USAGE$ t where OBJ# is not null or INTCOL# is not null
  6  ;
no rows selected
SQL>  SELECT /*+ index(t I_COL_USAGE$) */ OBJ#,INTCOL#
  2   FROM sys.COL_USAGE$ t where OBJ# is not null or INTCOL# is not null
  3   MINUS
  4  SELECT /*+ FULL(t1) */ OBJ#,INTCOL#
  5   FROM sys.COL_USAGE$ t1;
no rows selected
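As a final check, the structure validation that failed earlier can be re-run; after the rebuild it should complete without ORA-01499:

SQL> ANALYZE TABLE sys.COL_USAGE$ VALIDATE STRUCTURE CASCADE;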

The root cause this time: the UPDATE located a row through the index, but when it went to the table the row no longer existed, hence the error. The usual fix is to rebuild the index.
See also: the official note on ORA-600 [13013]

Using _allow_resetlogs_corruption to Open a Database Restored from an RMAN Backup with No Archived Logs

RMAN restore and recover

--restore the database
RMAN> restore database;
--recover the database
RMAN> recover database;
Starting recover at 2012-03-08 21:20:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=65 device type=DISK
starting media recovery
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/08/2012 21:20:47
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 2936 and starting SCN of 25991695 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 2935 and starting SCN of 25991652 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 2934 and starting SCN of 25991649 found to restore
……………………
RMAN-06025: no backup of archived log for thread 1 with sequence 2902 and starting SCN of 25991156 found to restore
RMAN complains about missing logs: after the datafiles were backed up, the archived logs were never backed up and have since all been lost.

Attempting incomplete recovery

SQL> recover database until cancel;
ORA-00279: change 25991194 generated at 03/08/2012 20:33:58 needed for thread 1
ORA-00289: suggestion : /opt/oracle/oradata/archivelog/chf/1_2902_752334071.dbf
ORA-00280: change 25991194 for thread 1 is in sequence #2902
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/oradata/chf/system01.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/oradata/chf/system01.dbf'

Checking the SCNs

SQL> select file#,to_char(checkpoint_change#,'999999999999') from v$datafile;
     FILE# TO_CHAR(CHECK
---------- -------------
         1      25992214
         2      25992214
         3      25992214
         4      25992214
         5      25992214
         6      25992214
         7      25992214
         8      25992214
         9      25992214
        10      25992214
        11      25992214
     FILE# TO_CHAR(CHECK
---------- -------------
        13      25992214
        14      25992214
13 rows selected.
SQL> select file#,online_status,to_char(change#,'999999999999') from v$recover_file;
     FILE# ONLINE_ TO_CHAR(CHANG
---------- ------- -------------
         1 ONLINE       25991194
         2 ONLINE       25991194
         3 ONLINE       25991194
         4 ONLINE       25991194
         5 ONLINE       25991194
         6 ONLINE       25991194
         7 ONLINE       25991194
         8 ONLINE       25991194
         9 ONLINE       25991194
        10 ONLINE       25991194
        11 ONLINE       25991194
     FILE# ONLINE_ TO_CHAR(CHANG
---------- ------- -------------
        13 ONLINE       25991194
        14 ONLINE       25991194
13 rows selected.
SQL> select file#,to_char(checkpoint_change#,'999999999999') from v$datafile_header;
     FILE# TO_CHAR(CHECK
---------- -------------
         1      25991194
         2      25991194
         3      25991194
         4      25991194
         5      25991194
         6      25991194
         7      25991194
         8      25991194
         9      25991194
        10      25991194
        11      25991194
     FILE# TO_CHAR(CHECK
---------- -------------
        13      25991194
        14      25991194
13 rows selected.
--the datafile header SCNs do not match the controlfile, so re-create the controlfile and query the SCNs again
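--the re-creation step itself is not shown here; a common approach (a sketch only,
--not the exact commands used in this case) is to generate the CREATE CONTROLFILE
--script from the mounted database and run it from NOMOUNT:
--  alter database backup controlfile to trace as '/tmp/cr_ctl.sql' resetlogs;
--  shutdown immediate
--  startup nomount
--  @/tmp/cr_ctl.sql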
SQL> select file#,to_char(checkpoint_change#,'999999999999') from v$datafile;
     FILE# TO_CHAR(CHECK
---------- -------------
         1      25991194
         2      25991194
         3      25991194
         4      25991194
         5      25991194
         6      25991194
         7      25991194
         8      25991194
         9      25991194
        10      25991194
        11      25991194
     FILE# TO_CHAR(CHECK
---------- -------------
        13      25991194
        14      25991194
13 rows selected.
SQL> select file#,online_status,to_char(change#,'999999999999') from v$recover_file;
     FILE# ONLINE_ TO_CHAR(CHANG
---------- ------- -------------
         1 ONLINE       25991194
         2 ONLINE       25991194
         3 ONLINE       25991194
         4 ONLINE       25991194
         5 ONLINE       25991194
         6 ONLINE       25991194
         7 ONLINE       25991194
         8 ONLINE       25991194
         9 ONLINE       25991194
        10 ONLINE       25991194
        11 ONLINE       25991194
     FILE# ONLINE_ TO_CHAR(CHANG
---------- ------- -------------
        13 ONLINE       25991194
        14 ONLINE       25991194
13 rows selected.
SQL> select file#,to_char(checkpoint_change#,'999999999999') from v$datafile_header;
     FILE# TO_CHAR(CHECK
---------- -------------
         1      25991194
         2      25991194
         3      25991194
         4      25991194
         5      25991194
         6      25991194
         7      25991194
         8      25991194
         9      25991194
        10      25991194
        11      25991194
     FILE# TO_CHAR(CHECK
---------- -------------
        13      25991194
        14      25991194
13 rows selected.
--all SCNs are now consistent

Trying to open the database

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/oradata/chf/system01.dbf'
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 25991194 generated at 03/08/2012 20:33:58 needed for thread 1
ORA-00289: suggestion : /opt/oracle/oradata/archivelog/chf/1_2902_752334071.dbf
ORA-00280: change 25991194 for thread 1 is in sequence #2902
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/oradata/chf/system01.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/oradata/chf/system01.dbf'

Opening the database with the hidden parameter

SQL> create pfile='/tmp/pfile' from spfile;
File created.
-------add the following to /tmp/pfile----------
_allow_resetlogs_corruption= TRUE
---------------------------------
SQL> startup mount pfile='/tmp/pfile' force
ORACLE instance started.
Total System Global Area  622149632 bytes
Fixed Size                  2230912 bytes
Variable Size             419431808 bytes
Database Buffers          192937984 bytes
Redo Buffers                7548928 bytes
Database mounted.
SQL> alter database open resetlogs;
Database altered.
SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ WRITE

Summary
This experiment has little practical value in itself, but it illustrates a few points:
1. Even when every datafile SCN is consistent, and even consistent with the controlfile, the database is not guaranteed to open (presumably because an SCN recorded inside the datafiles is higher than the datafile header SCN).
2. Whether patching the SCN in every datafile header with bbed would solve this kind of problem is an open question.
3. If RMAN backed up only the datafiles and not a single archived log, the database can still be opened with the hidden parameter so that the data can be rescued (a sketch follows).
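For the data-rescue step itself, the usual move is to export everything immediately after the forced open, since an instance opened this way should not be trusted long term; a hedged sketch (the credentials are a placeholder and DATA_PUMP_DIR is the default 11g directory object):

$ expdp system/<password> full=y directory=DATA_PUMP_DIR dumpfile=rescue_full_%U.dmp logfile=rescue_full.log parallel=2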

Recovering Datafiles from ASM

Quite often ASM cannot be started normally and the data in it appears to be lost. Two ways of retrieving datafiles from ASM are shown below.
I. Using the AMDU tool
AMDU is a free tool shipped with Oracle 11g for analysing ASM disk group metadata and for extracting datafiles from disk groups that cannot be mounted. MOS "NOTE:553639.1 Placeholder for AMDU binaries and using with ASM 10g" states explicitly that AMDU can also be used with 10g, and provides downloadable AMDU builds for 10g for each operating system.
1. Set LD_LIBRARY_PATH

rac1-> export LD_LIBRARY_PATH=/tmp/amdu

2. Inspect the ASM disks

rac1-> ./amdu -diskstring '/dev/raw/raw*'
amdu_2012_03_05_03_17_19/
rac1-> cd amdu_2012_03_05_03_17_19/
rac1-> ls
report.txt
rac1-> more report.txt
-*-amdu-*-
******************************* AMDU Settings ********************************
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
System name:    Linux
Node name:      rac1
Release:        2.6.9-89.0.0.0.1.ELhugemem
Version:        #1 SMP Tue May 19 04:38:38 EDT 2009
Machine:        i686
amdu run:       05-MAR-12 03:17:19
Endianess:      1
--------------------------------- Operations ---------------------------------
------------------------------- Disk Selection -------------------------------
 -diskstring '/dev/raw/raw*'
------------------------------ Reading Control -------------------------------
------------------------------- Output Control -------------------------------
********************************* DISCOVERY **********************************
----------------------------- DISK REPORT N0001 ------------------------------
                Disk Path: /dev/raw/raw23
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 156 megabytes
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **
----------------------------- DISK REPORT N0002 ------------------------------
                Disk Path: /dev/raw/raw11
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 156 megabytes
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **
----------------------------- DISK REPORT N0003 ------------------------------
                Disk Path: /dev/raw/raw32
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 156 megabytes
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **
----------------------------- DISK REPORT N0004 ------------------------------
                Disk Path: /dev/raw/raw31
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 156 megabytes
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **
----------------------------- DISK REPORT N0005 ------------------------------
                Disk Path: /dev/raw/raw12
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 156 megabytes
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **
----------------------------- DISK REPORT N0006 ------------------------------
                Disk Path: /dev/raw/raw2
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 1976 megabytes
               Group Name: DATA
                Disk Name: DATA_0001
       Failure Group Name: DATA_0001
              Disk Number: 1
            Header Status: 3
       Disk Creation Time: 2012/03/01 22:32:39.289000
          Last Mount Time: 2012/03/05 02:10:02.771000
    Compatibility Version: 0x0a100000
         Disk Sector Size: 512 bytes
         Disk size in AUs: 1976 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2012/03/01 22:32:39.221000
  File 1 Block 1 location: AU 0
----------------------------- DISK REPORT N0007 ------------------------------
                Disk Path: /dev/raw/raw1
           Unique Disk ID:
               Disk Label:
     Physical Sector Size: 512 bytes
                Disk Size: 3137 megabytes
               Group Name: DATA
                Disk Name: DATA_0000
       Failure Group Name: DATA_0000
              Disk Number: 0
            Header Status: 3
       Disk Creation Time: 2012/03/01 22:32:39.289000
          Last Mount Time: 2012/03/05 02:10:02.771000
    Compatibility Version: 0x0a100000
         Disk Sector Size: 512 bytes
         Disk size in AUs: 3137 AUs
         Group Redundancy: 1
      Metadata Block Size: 4096 bytes
                  AU Size: 1048576 bytes
                   Stride: 113792 AUs
      Group Creation Time: 2012/03/01 22:32:39.221000
  File 1 Block 1 location: AU 2
******************************* END OF REPORT ********************************

The report tells us the following:
1) Only /dev/raw/raw1 and raw2 are valid ASM disks.
2) There is only one disk group, DATA.

3. Identify the datafiles

rac1-> ../amdu -diskstring '/dev/raw/raw*'  -dump 'DATA'
amdu_2012_03_05_03_19_38/
rac1-> cd amdu_2012_03_05_03_19_38/
rac1-> ls
DATA_0001.img  DATA.map  report.txt
rac1-> strings DATA_0001.img |grep DATAFILE
DATAFILE
DATAFILE
DATAFILE
DEVDB/DATAFILE/SYSTEM.256.776817753
DEVDB/DATAFILE/SYSAUX.257.776817753
DEVDB/DATAFILE/UNDOTBS1.258.776817753
DEVDB/DATAFILE/USERS.259.776817753
DEVDB/DATAFILE/UNDOTBS2.267.776817909

From this output the ASM file number of each datafile can be read off (the first number in each entry).

4. Extract the datafile

rac1-> ../../amdu -diskstring '/dev/raw/raw*' -extract 'DATA.259'
amdu_2012_03_05_03_27_21/
rac1-> cd amdu_2012_03_05_03_27_21/
rac1-> ls -l
total 5148
-rw-r--r--  1 oracle oinstall 5251072 Mar  5 03:27 DATA_259.f
-rw-r--r--  1 oracle oinstall    6468 Mar  5 03:27 report.txt
rac1-> dbv file=DATA_259.f
DBVERIFY: Release 10.2.0.1.0 - Production on Mon Mar 5 03:28:00 2012
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
DBVERIFY - Verification starting : FILE = DATA_259.f
DBVERIFY - Verification complete
Total Pages Examined         : 640
Total Pages Processed (Data) : 15
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 2
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 26
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 597
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 442362 (0.442362)

II. Using the aulasm tool
1. List the disks used by ASM

[root@rac1 oracle]# more disk
/dev/sdb1
/dev/sdb2

2. Run aulasm

[root@rac1 oracle]# ./aulasm_linux32_trial.bin
AUL : AnySQL UnLoader(MyDUL) for Oracle ASM, release 2.0.0
(C) Copyright Lou Fangxin 2011-2012 (AnySQL.net), all rights reserved.
ASM> help
  SET        -- set options (BLOCK_SIZE, AU_SIZE, F1B1_DISK, F1B1_AU)
  OPEN       -- open data file
  ADD        -- add disk with customized disk num (disk, path)
  LIST       -- list opened disks information (to)
  LSAU       -- list file of given AU (disk, au, count)
  FILE       -- list all file entries in ASM (file, [disk, au, block]).
  ALIAS      -- list all alias entries in ASM (file, [disk, au, block]).
  EXTENT     -- list or adjust variable extent (level, au, count).
  FMAP       -- list file extent (file, [disk, au, block], to)
  DUMP       -- dump file extent (file, [disk, au, block], offset, to)
  COPY       -- copy file to os  (file, [disk, au, block], to)
  QUIT/EXIT  -- exit the program.
ASM> open disk
2012-03-05 03:32:48
2012-03-05 03:32:48
ASM> list
2012-03-05 03:32:49
Total Disks = 2, ausize=1048576, blksize=4096
 disk     size block disk name        disk group   disk path
===== ======== ===== ================ ============ ========================
    0     3137  4096 DATA_0000        DATA         /dev/sdb1
    1     1976  4096 DATA_0001        DATA         /dev/sdb2
f1b1disk = 0, f1b1au = 2, score=256, compat=0x0a100000
file=(1,0,2,1), disk=(2,0,2,2), alias=(6,0,2,6)
2012-03-05 03:32:49
ASM> alias
2012-03-05 03:33:15
    file          inc parent fstblk flag  refer alias
======== ============ ====== ====== ==== ====== ==========================
      -1           -1      0      0    4      1 DEVDB
      -1           -1      0      1    4      2 DATAFILE
      -1           -1      0      1    4      3 CONTROLFILE
      -1           -1      0      1    4      4 ONLINELOG
      -1           -1      0      1    4      5 TEMPFILE
      -1           -1      0      1    4      6 PARAMETERFILE
      -1           -1      0      1    4      7 ARCHIVELOG
     256    776817753      1      2   18     -1 SYSTEM
     257    776817753      1      2   18     -1 SYSAUX
     258    776817753      1      2   18     -1 UNDOTBS1
     259    776817753      1      2   18     -1 USERS
     267    776817909      1      2   18     -1 UNDOTBS2
     261    776817829      1      3   18     -1 Current
     260    776817829      1      3   18     -1 Current
     262    776817833      1      4   18     -1 group_1
     263    776817835      1      4   18     -1 group_1
     264    776817837      1      4   18     -1 group_2
     265    776817841      1      4   18     -1 group_2
     268    776818011      1      4   18     -1 group_3
     269    776818013      1      4   18     -1 group_3
     270    776818015      1      4   18     -1 group_4
     271    776818017      1      4   18     -1 group_4
     266    776817859      1      5   18     -1 TEMP
     272    776818021      1      6   18     -1 spfile
      -1           -1      1      7    4      8 2012_03_01
      -1           -1      1      7    4      9 2012_03_03
      -1           -1      1      7    4     10 2012_03_05
     273    776818747      7      8   18     -1 thread_1_seq_2
     274    776819571      7      8   18     -1 thread_1_seq_3
     275    776819721      7      8   18     -1 thread_2_seq_1
     276    776820331      7      8   18     -1 thread_1_seq_4
     277    776992321      7      9   18     -1 thread_1_seq_5
     278    776992321      7      9   18     -1 thread_2_seq_2
     279    777089437      7     10   18     -1 thread_1_seq_6
2012-03-05 03:33:15
ASM> fmap file 259
2012-03-05 03:34:49
fid=259, disk=0, au=56, block=3
siz=5, ecnt=6, eeof=6, flags=17, type=12, alias=109,-1
     extid  disk       auid  cnt flg chk
========== ===== ========== ==== === ===
         1     0        428    1   0 135
         2     1        408    1   0 178
         3     0        429    1   0 134
         4     1        409    1   0 179
         5     0        430    1   0 133
         6     1        410    1   0 176
2012-03-05 03:34:49
ASM> copy file 259 to user_259
2012-03-05 03:35:01
2012-03-05 03:35:01
[root@rac1 oracle]# ll
total 6252
-rwxr-xr-x  1 root   root       84619 Mar  6  2012 aulasm_linux32_trial.bin
-rw-r--r--  1 root   root          20 Mar  5 02:52 disk
-rw-r--r--  1 root   root     6291456 Mar  5 03:35 user_259
[root@rac1 oracle]# su - oracle
rac1-> dbv file=user_259
DBVERIFY: Release 10.2.0.1.0 - Production on Mon Mar 5 03:35:27 2012
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
DBVERIFY - Verification starting : FILE = user_259
DBVERIFY - Verification complete
Total Pages Examined         : 640
Total Pages Processed (Data) : 15
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 2
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 26
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 597
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 442362 (0.442362)

dbv shows the two extracted copies of the datafile are essentially identical (the file sizes differ slightly, probably because the two tools treat unformatted blocks differently), which confirms that both methods can pull datafiles out of a broken ASM environment.

III. Summary and comparison
1. AMDU is completely free; aulasm requires a licence once there are more than 16 disks.
2. AMDU is a little more cumbersome to use than aulasm, but still perfectly workable.
3. Once the datafiles have been pulled out of ASM, recovering the data or opening the database from them is the easy part.
If the disk headers are too badly damaged (or for some other reason) neither method can extract the datafiles from the ASM disk group, see: recovering from completely destroyed ASM disk headers.

The Relationship Between PARALLEL and DUMPFILE in expdp

A friend asked today what happens when PARALLEL is set to n (greater than 1) while DUMPFILE names a single file (no %U). The experiments below illustrate the behaviour.
1. With higher parallelism (4 in this test), expdp fails with ORA-39095

[oracle@node1 tmp]$ expdp chf/xifenfei DUMPFILE=xifenfei.dmp TABLES=t1 PARALLEL=4
Export: Release 11.2.0.3.0 - Production on Wed Mar 7 20:48:52 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Starting "CHF"."SYS_EXPORT_TABLE_01":  chf/******** DUMPFILE=xifenfei.dmp TABLES=t1 PARALLEL=4
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5.362 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes

2. With lower parallelism (2 in this test), observe the job via ATTACH
2.1) Run the expdp export

[oracle@node1 tmp]$ expdp chf/xifenfei DUMPFILE=xifenfei.dmp TABLES=t1 PARALLEL=2
Export: Release 11.2.0.3.0 - Production on Wed Mar 7 20:49:15 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Starting "CHF"."SYS_EXPORT_TABLE_02":  chf/******** DUMPFILE=xifenfei.dmp TABLES=t1 PARALLEL=2
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5.362 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "CHF"."T1"                                  4.647 GB 51080958 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "CHF"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for CHF.SYS_EXPORT_TABLE_02 is:
  /opt/oracle/diag/rdbms/chf/chf/trace/xifenfei.dmp
Job "CHF"."SYS_EXPORT_TABLE_02" successfully completed at 20:52:23

2.2) While the export is running, attach to the job to observe what the workers are doing

[oracle@node1 trace]$ expdp chf/xifenfei attach=SYS_EXPORT_TABLE_02
Export: Release 11.2.0.3.0 - Production on Wed Mar 7 20:49:38 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Job: SYS_EXPORT_TABLE_02
  Owner: CHF
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: BAA7964815021C96E0438C09A8C04319
  Start Time: Wednesday, 07 March, 2012 20:49:16
  Mode: TABLE
  Instance: chf
  Max Parallelism: 2
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        chf/******** DUMPFILE=xifenfei.dmp TABLES=t1 PARALLEL=2
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 2
  Job Error Count: 0
  Dump File: /opt/oracle/diag/rdbms/chf/chf/trace/xifenfei.dmp
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: CHF
  Object Name: T1
  Object Type: TABLE_EXPORT/TABLE/TABLE
  Completed Objects: 1
  Total Objects: 1
  Worker Parallelism: 1
Worker 2 Status:
  Process Name: DW02
  State: EXECUTING
  Object Schema: CHF
  Object Name: T1
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 1
  Total Objects: 1
  Completed Rows: 16,104,279
  Worker Parallelism: 1
Export> status
Job: SYS_EXPORT_TABLE_02
  Operation: EXPORT
  Mode: TABLE
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 2
  Job Error Count: 0
  Dump File: /opt/oracle/diag/rdbms/chf/chf/trace/xifenfei.dmp
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: CHF
  Object Name: T1
  Object Type: TABLE_EXPORT/TABLE/TABLE
  Completed Objects: 1
  Total Objects: 1
  Worker Parallelism: 1
Worker 2 Status:
  Process Name: DW02
  State: EXECUTING
  Object Schema: CHF
  Object Name: T1
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 1
  Total Objects: 1
  Completed Rows: 35,425,534
  Worker Parallelism: 1
Export> status
Job: SYS_EXPORT_TABLE_02
  Operation: EXPORT
  Mode: TABLE
  State: COMPLETING
  Bytes Processed: 4,989,989,105
  Percent Done: 100
  Current Parallelism: 2
  Job Error Count: 0
  Dump File: /opt/oracle/diag/rdbms/chf/chf/trace/xifenfei.dmp
    bytes written: 4,990,009,344
Worker 1 Status:
  Process Name: DW00
  State: WORK WAITING
Worker 2 Status:
  Process Name: DW02
  State: WORK WAITING

3. Conclusions
3.1) If the parallelism is set too high, the export fails outright (ORA-39095).
3.2) If the number of dump files is smaller than the parallelism, the extra workers simply have nothing to do.
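A common way to avoid both problems is to let the number of dump files grow with the parallelism by using the %U substitution variable; a sketch based on the same table:

[oracle@node1 tmp]$ expdp chf/xifenfei DUMPFILE=xifenfei_%U.dmp TABLES=t1 PARALLEL=4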

Using the LNNVL Function

Official definition of LNNVL
LNNVL provides a concise way to evaluate a condition when one or both operands of the condition may be null. The function can be used only in the WHERE clause of a query. It takes as an argument a condition and returns TRUE if the condition is FALSE or UNKNOWN and FALSE if the condition is TRUE. LNNVL can be used anywhere a scalar expression can appear, even in contexts where the IS [NOT] NULL, AND, or OR conditions are not valid but would otherwise be required to account for potential nulls. Oracle Database sometimes uses the LNNVL function internally in this way to rewrite NOT IN conditions as NOT EXISTS conditions. In such cases, output from EXPLAIN PLAN shows this operation in the plan table output. The condition can evaluate any scalar values but cannot be a compound condition containing AND, OR, or BETWEEN.

The official definition, in brief
LNNVL takes a condition in a WHERE clause and inverts it with null handling: if the condition is TRUE it returns FALSE; if the condition is FALSE or UNKNOWN (null) it returns TRUE. The argument cannot be a compound condition built with AND, OR, or BETWEEN.

Test setup

SQL> create table xifenfei(name varchar2(20),year number);
Table created.
SQL> insert into xifenfei values('xifenfei2001',2001);
1 row created.
SQL> insert into xifenfei values('xifenfei2002',2002);
1 row created.
SQL> insert into xifenfei values('xifenfei2003',2003);
1 row created.
SQL> insert into xifenfei values('xifenfei2004',2004);
1 row created.
SQL> insert into xifenfei values('xifenfei2005',2005);
1 row created.
SQL> insert into xifenfei values('xifenfei2006',2006);
1 row created.
SQL> insert into xifenfei values('xifenfei2007',2007);
1 row created.
SQL> insert into xifenfei values('xifenfei2008',null);
1 row created.
SQL> insert into xifenfei values('xifenfei2009',2009);
1 row created.
SQL> insert into xifenfei values('xifenfei2010',2010);
1 row created.
SQL> insert into xifenfei values('xifenfei2011',2011);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from xifenfei;
NAME                       YEAR
-------------------- ----------
xifenfei2001               2001
xifenfei2002               2002
xifenfei2003               2003
xifenfei2004               2004
xifenfei2005               2005
xifenfei2006               2006
xifenfei2007               2007
xifenfei2008
xifenfei2009               2009
xifenfei2010               2010
xifenfei2011               2011
11 rows selected.

Test cases

--condition: year < 2009 (with lnnvl the query returns rows where year >= 2009 or year is null)
SQL> select * from xifenfei where lnnvl(year<2009);
NAME                       YEAR
-------------------- ----------
xifenfei2008
xifenfei2009               2009
xifenfei2010               2010
xifenfei2011               2011
--condition: year is not null (with lnnvl the query returns rows where year is null)
SQL> select * from xifenfei where lnnvl(year is not null);
NAME                       YEAR
-------------------- ----------
xifenfei2008
--condition: year is null (with lnnvl the query returns rows where year is not null)
SQL> select * from xifenfei where lnnvl(year is  null);
NAME                       YEAR
-------------------- ----------
xifenfei2001               2001
xifenfei2002               2002
xifenfei2003               2003
xifenfei2004               2004
xifenfei2005               2005
xifenfei2006               2006
xifenfei2007               2007
xifenfei2009               2009
xifenfei2010               2010
xifenfei2011               2011
10 rows selected.
--condition: year = 12345 (with lnnvl the query returns rows where year != 12345 or year is null)
SQL> select * from xifenfei where lnnvl(year =12345);
NAME                       YEAR
-------------------- ----------
xifenfei2001               2001
xifenfei2002               2002
xifenfei2003               2003
xifenfei2004               2004
xifenfei2005               2005
xifenfei2006               2006
xifenfei2007               2007
xifenfei2008
xifenfei2009               2009
xifenfei2010               2010
xifenfei2011               2011
11 rows selected.
--condition: year != 12345 (with lnnvl the query returns rows where year = 12345 or year is null)
SQL> select * from xifenfei where lnnvl(year !=12345);
NAME                       YEAR
-------------------- ----------
xifenfei2008
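Each of these predicates can also be written without LNNVL by spelling out the IS NULL branch; for example, the first test above is equivalent to the following query, which returns the same four rows:

SQL> select * from xifenfei where year >= 2009 or year is null;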