Resolving the UNKNOWN resource state in Oracle RAC 10g

1. Symptom
rac2-> crs_stat -t
Name Type Target State Host
————————————————————
ora.rac1.gsd application ONLINE UNKNOWN rac1
ora.rac1.ons application ONLINE UNKNOWN rac1
ora.rac1.vip application ONLINE UNKNOWN rac1
ora.rac2.gsd application ONLINE UNKNOWN rac2
ora.rac2.ons application ONLINE UNKNOWN rac2
ora.rac2.vip application ONLINE UNKNOWN rac1
Various causes can leave resources in the UNKNOWN state; for example, a clock difference of a minute or more between the two nodes can trigger it.
2. Solutions
Because resources in the UNKNOWN state cannot be started or stopped normally (crs_start -all / crs_stop -all), there are two ways to fix this:
2.1) Stop the CRS stack and then start it again (as root, on both nodes)
/etc/init.d/init.crs stop
/etc/init.d/init.crs start
2.2) Stop and then start the affected resources individually (as oracle, on one node)
crs_stop ora.rac2.gsd (crs_stop -f ora.rac2.gsd to force it to stop)
crs_start ora.rac2.gsd (crs_start -f ora.rac2.gsd to force it to start)
A variation of this method is to stop the resources one by one and then bring everything back with crs_start -all, as sketched below.
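A minimal sketch of that variation, using the resource names from the crs_stat output above:
crs_stop -f ora.rac1.gsd
crs_stop -f ora.rac1.ons
crs_stop -f ora.rac1.vip
crs_stop -f ora.rac2.gsd
crs_stop -f ora.rac2.ons
crs_stop -f ora.rac2.vip
crs_start -all
crs_stat -t     # every resource should now show ONLINE/ONLINE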

CRS 10.2.0.1 bug (when the public IP is a private address)

When the public network interface of CRS 10.2.0.1 uses an address in the 10.x.x.x, 172.16-31.x.x, or 192.168.x.x ranges (private, non-routable ranges):
1. One of the following two symptoms appears:
1) The final verification fails with the following error:
Checking existence of VIP node application (required)
Check failed.
Check failed on nodes:
rac1,rac2
2) Running the root.sh script on rac2 ends with:
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
……
Timed out waiting for the CRS stack to start.
2. Solution:
After root.sh has completed on the second node, manually run VIPCA as root on the second node
# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
and configure the virtual IPs.
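A quick check afterwards (a small sketch, assuming the node names rac1/rac2 from above and srvctl in the same CRS home bin directory as vipca):
srvctl status nodeapps -n rac2
crs_stat -t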

Configuring time synchronization (NTP) for Oracle 10g RAC

The clocks on all nodes must be synchronized, which is usually achieved with an NTP server. If the network already has a time server, every node can simply point to it; otherwise, pick one node in the cluster to act as the time server and have the other nodes synchronize with it. Both approaches are shown below.
(1) If the corporate network already has a time server
For example, 192.168.11.10 is an NTP time server.
The NTP configuration file is /etc/ntp.conf; edit it on every node.
[root@cc-svr-a ~]# vi /etc/ntp.conf
After editing, the file contains the following, preferring the corporate time server:
server 192.168.11.10 prefer
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
(2) If there is no external time server
In that case, pick one node in the cluster to act as the time server; the NTP service needs very little in the way of system resources. Assume the primary node is chosen as the time server and the other node synchronizes with it; the configuration is as follows.
Edit /etc/ntp.conf on the primary node so that it contains:
server 127.127.1.0
fudge 127.127.1.0 stratum 11
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
Edit /etc/ntp.conf on the secondary node so that it contains:
server 192.168.11.12 prefer # note: 192.168.11.12 is the IP address of the primary node
driftfile /var/lib/ntp/drift
broadcastdelay 0.008
Once configured, start the NTP service.
[root@cc-svr-a ~]# /etc/init.d/ntpd start
[root@cc-svr-a ~]# chkconfig --level 345 ntpd on
(3) Check synchronization status (on primary and secondary)
[root@cc-svr-a ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*CC-Node-02 LOCAL(0) 12 u 58 64 17 0.226 -22.698 29.261
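If a node's clock is far off to begin with, a one-time manual sync before starting ntpd helps it converge (a small sketch, assuming the 192.168.11.10 time server from above):
[root@cc-svr-a ~]# service ntpd stop
[root@cc-svr-a ~]# ntpdate 192.168.11.10
[root@cc-svr-a ~]# service ntpd start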

Configuring NFS on Linux

I. RPM package preparation
The nfs-utils and portmap packages must be installed; rpm -q checks whether they are:
rpm -q nfs-utils portmap
If they are missing, install them with the commands below (they are installed by default):
yum install nfs-utils
yum install portmap
II. Server side
1. Edit the /etc/exports file
Share /tmp/nfs with the host 192.168.11.12, read-write:
/tmp/nfs 192.168.11.12(rw,sync)
Share /mnt/sda4/share/a with only the host 192.168.23.129, read-write:
/mnt/sda4/share/a 192.168.23.129(rw)
Share /mnt/sda4/share/b so that 192.168.23.129 gets read-write access and every other host read-only access:
/mnt/sda4/share/b 192.168.23.129(rw) *(ro)
Share /mnt/sda4/share/d so that only hosts in the 192.168.23.0/24 subnet can access and write to it:
/mnt/sda4/share/d 192.168.23.0/24(rw)
Option notes:
rw: read-write access
ro: read-only access
sync: data is written synchronously to both memory and disk
async: data is buffered in memory first rather than written straight to disk
2. Start the portmap service:
service portmap start (or restart)
3. Start the NFS service:
service nfs start (or restart)
4. Using exportfs
After modifying /etc/exports there is no need to restart the NFS service; just have exportfs rescan /etc/exports and reload it.
Syntax: exportfs [-aruv]
-a: export (or unexport) everything listed in /etc/exports
-r: re-export the entries in /etc/exports, synchronizing /var/lib/nfs/xtab with /etc/exports
-u: unexport a directory
-v: be verbose, printing the shared directories to the screen while exporting
# exportfs -rva // re-export everything
# exportfs -auv // unexport everything
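To double-check what is actually exported (a quick sketch; showmount ships with nfs-utils):
# exportfs -v // list the current exports together with their options
# showmount -e 192.168.11.11 // query the server's export list, e.g. from a client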
5. Additional notes:
5.1) Starting NFS only requires the portmap and nfs services. If data-consistency (file locking) checks are needed, also start the nfslock service.
Use netstat -tnlu to see which ports NFS has opened: portmap listens on port 111 and nfs on port 2049.
[root@CC-Node-01 nfs]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100021 1 udp 46231 nlockmgr
100021 3 udp 46231 nlockmgr
100021 4 udp 46231 nlockmgr
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100021 1 tcp 46322 nlockmgr
100021 3 tcp 46322 nlockmgr
100021 4 tcp 46322 nlockmgr
5.2) If portmap and nfs are already running and /etc/exports has been changed, a single exportfs -rva reload is all that is needed.
III. Client side
1. Start the portmap service:
service portmap start (or restart)
2. Mount the server's shared directory (assuming the server is 192.168.11.11):
[root@ECP-UC-DB1 tmp]# mount -t nfs 192.168.11.11:/tmp/nfs /tmp/nfs
[root@ECP-UC-DB1 nfs]# df -h
192.168.11.11:/tmp/nfs
19G 15G 3.6G 81% /tmp/nfs
3. Unmount the NFS share
[root@ECP-UC-DB1 tmp]# umount /tmp/nfs
4. Mount automatically at boot
Edit the /etc/fstab file:
192.168.11.11:/tmp/nfs /tmp/nfs nfs defaults,rw 0 0
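To test the fstab entry without rebooting (a minimal check):
[root@ECP-UC-DB1 tmp]# mount -a
[root@ECP-UC-DB1 tmp]# df -h | grep nfs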
5. Additional note:
Right after the server was first configured, the client could mount the share but only read it, not write to it; adding write and execute permissions on the shared directory on the server fixed the problem, for example as sketched below.
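A hedged sketch of that fix on the server side (the exact mode depends on which user writes from the client; the note above only says write and execute bits were added):
[root@CC-Node-01 nfs]# chmod o+wx /tmp/nfs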

Optimizing select max(id), min(id) from table

1. Check the database version
SQL> select * from v$version where rownum<2;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
2. Create the table and index
create table t_a as select * from dba_objects;
create index t_a_ind on t_a(object_id);
3. Query the maximum value
SQL> select max(object_id) from t_a;
Execution plan
———————————————————-
Plan hash value: 3226265922
————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| T_A_IND | 1 | 13 | 2 (0)| 00:00:01 |
————————————————————————————–
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
431 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
4. Query the minimum value
SQL> select min(object_id) from t_a;
Execution plan
———————————————————-
Plan hash value: 3226265922
————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| T_A_IND | 1 | 13 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
429 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
5. Query the maximum and minimum together
SQL> select max(object_id),min(object_id) from t_a;
Execution plan
———————————————————-
Plan hash value: 2127980459
—————————————————————————
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————
| 0 | SELECT STATEMENT | | 1 | 13 | 293 (1)| 00:00:04 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | TABLE ACCESS FULL| T_A | 78093 | 991K| 293 (1)| 00:00:04 |
—————————————————————————
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
4 recursive calls
0 db block gets
1119 consistent gets
1044 physical reads
0 redo size
502 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
6. Query the maximum and minimum together (excluding NULL)
SQL> select max(object_id),min(object_id) from t_a where object_id is not null;
Execution plan
———————————————————-
Plan hash value: 1214261695
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 50 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | INDEX FAST FULL SCAN| T_A_IND | 78093 | 991K| 50 (2)| 00:00:01 |
---------------------------------------------------------------------------------
Predicate Information (identified by operation id):
—————————————————
2 - filter("OBJECT_ID" IS NOT NULL)
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
4 recursive calls
0 db block gets
242 consistent gets
0 physical reads
0 redo size
502 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
7. Query the maximum and minimum separately
SQL> select (select max(object_id) from t_a) max,(select min(object_id) from t_a) min from dual;
Execution plan
———————————————————-
Plan hash value: 312201770
————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | INDEX FULL SCAN (MIN/MAX)| T_A_IND | 1 | 13 | 2 (0)| 00:00:01 |
| 3 | SORT AGGREGATE | | 1 | 13 | | |
| 4 | INDEX FULL SCAN (MIN/MAX)| T_A_IND | 1 | 13 | 2 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
————————————————————————————–
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
480 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Notes:
1. When only the maximum or only the minimum is queried, the SQL automatically uses the index.
2. When the maximum and minimum are queried together, the SQL does a full table scan rather than the index fast full scan one might expect.
3. Adding a WHERE clause that filters out NULLs makes the SQL use an index fast full scan. The reason: when Oracle cannot be sure the indexed column is NOT NULL (and NULLs have not been filtered out), it cannot answer the query from the index alone, so it falls back to a full table scan to guarantee a correct result.
4. INDEX FULL SCAN (MIN/MAX) is very efficient, so querying the maximum and minimum separately improves execution efficiency.
5. An alternative way to write it
SQL> select (select /*+ index_asc(t_a t_a_ind) */ object_id from t_a where rownum=1) min,
            (select /*+ index_desc(t_a t_a_ind) */ object_id from t_a where rownum=1) max
       from dual;
Execution plan
———————————————————-
Plan hash value: 674626822
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | INDEX FULL SCAN | T_A_IND | 78093 | 991K| 2 (0)| 00:00:01 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | INDEX FULL SCAN DESCENDING| T_A_IND | 78093 | 991K| 2 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
—————————————————
1 - filter(ROWNUM=1)
3 - filter(ROWNUM=1)
Note: dynamic sampling used for this statement (level=2)
Statistics
———————————————————-
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
480 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Caution:
1) For this to return the correct result, the object_id column must carry a NOT NULL constraint; otherwise the hints do not have the intended effect (see the sketch after this note).
2) Judging by the statistics in the execution plan, this performs the same as the INDEX FULL SCAN (MIN/MAX) approach.
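A minimal sketch of putting that prerequisite in place on the test table (it will fail with ORA-02296 if the column actually contains NULLs):
SQL> alter table t_a modify (object_id not null);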

Installing Statspack

I realized today that I never wrote down how Statspack is installed, so here it is; next time I need it I can simply look it up on the blog instead of hunting through the disk for the installation and configuration files.
1. Create a dedicated tablespace for Statspack
CREATE TABLESPACE SP
DATAFILE
'/opt/oracle/oradata/usercent/PS_1.001.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 5G,
'/opt/oracle/oradata/usercent/PS_1.002.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 5G,
'/opt/oracle/oradata/usercent/PS_1.003.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 5G
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
2. Check whether a PERFSTAT user already exists
If it does, find out what it is being used for, handle that first, and then drop it:
select * from dba_users where username='PERFSTAT';
drop user PERFSTAT cascade;
3. Check job_queue_processes
SQL> show parameter job
NAME TYPE VALUE
———————————— ———– ——————————
job_queue_processes integer 6
If it is 0, change it with
alter system set job_queue_processes=6;
The value does not have to be 6.
4. Check timed_statistics
SQL> show parameter timed_statistics
NAME TYPE VALUE
———————————— ———– ——————————
timed_statistics boolean TRUE
If timed_statistics is FALSE, set it to TRUE with the following command:
alter system set timed_statistics=true;
5. Install Statspack
5.1) Log in to the database as the SYS user
5.2) Install the main Statspack objects
Run @?/rdbms/admin/spcreate.sql
If it fails, run @?/rdbms/admin/spdrop.sql and then install again.
5.3) Set up automatic snapshot collection
Run @?/rdbms/admin/spauto.sql
The collection interval can be changed by editing the script (e.g. with vi) before running it.
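Once installed, a snapshot can also be taken by hand and a report generated between two snapshot IDs (standard Statspack usage, run as the PERFSTAT user):
SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport.sql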

The GREATEST/LEAST functions

GREATEST
greatest( expr1, expr2, … expr_n )
expr1, expr2, ... expr_n can be literal values or expressions.
Purpose: returns the largest of the values.
SQL> SELECT GREATEST(10,21,3,15,18) FROM dual;
GREATEST(10,21,3,15,18)
———————–
21
SQL> SELECT GREATEST('abc','bca','cba','bac','cabb','cbaa') FROM dual;
GREATEST('ABC','BCA','CBA','BA
——————————
cbaa
LEAST
least( expr1, expr2, … expr_n )
expr1, expr2, ... expr_n can be literal values or expressions.
Purpose: returns the smallest of the values.
SQL> SELECT least(10,21,3,15,18) FROM dual;
LEAST(10,21,3,15,18)
——————–
3
SQL> SELECT least('abc','bca','cba','a','bac','cabb','cbaa') FROM dual;
LEAST('ABC','BCA','CBA','A','B
——————————
a

Oracle 10g flashback

I. Oracle Flashback Drop
With Flashback Drop, Oracle 10g can recover from DROP (DDL) operations. Oracle provides a recycle bin (RECYCLEBIN), similar to a desktop recycle bin, that collects dropped objects: when an object is dropped, Oracle actually just records it in a data dictionary table. When the object is truly no longer needed, it can be removed from the recycle bin with the PURGE command.
select object_name,droptime,dropscn,purge_object,ORIGINAL_NAME from recyclebin;
flashback table f_drop to before drop;
Purging the recycle bin:
1) Purge one table: purge table "BIN$N+i42FTvSSemvMrH6frCQg==$0"; (or purge table table_name;)
2) Purge one index: purge index index_name;
3) Purge everything: PURGE recyclebin;
II. Oracle Flashback Table
This is for a table whose data has been changed by mistake (erroneous DML; a dropped table is handled by Flashback Drop above). Flashback Table rewinds the existing table to an earlier SCN or point in time.
select t_odu.*, ora_rowscn from t_odu; -- the SCN associated with each row
select dbms_flashback.get_system_change_number from dual; -- the current system SCN
alter table t_odu enable row movement; -- enable row movement (required by flashback table)
flashback table t_odu to scn 1831189; -- flash back to an SCN
flashback table t_odu to timestamp to_timestamp('2011-6-27 11:45:20','yyyy-mm-dd hh24:mi:ss'); -- flash back to a point in time
A note on SCNs:
select ora_rowscn from <table> returns the current SCN of each row in the result set; timestamp_to_scn() converts a timestamp to an SCN, and scn_to_timestamp() converts an SCN to a timestamp.
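A minimal illustration of the two conversion functions, using the t_odu table above:
select timestamp_to_scn(systimestamp) from dual;
select scn_to_timestamp(ora_rowscn) from t_odu where rownum = 1; -- may raise ORA-08181 if the row's SCN is too old to map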
III. Oracle Flashback Version Query
Oracle 10g flashback records every committed version of a row, rather like an auditing feature; with flashback you can conveniently find out what was done and when, including flashback version queries and auditing.
select COUNT(*) from t_query as of scn 1831544;
SELECT COUNT(*) FROM t_query as of timestamp to_timestamp('2011-07-21 14:58:00','yyyy-mm-dd hh24:mi:ss');
IV. Oracle Flashback Transaction Query
Flashback Transaction Query provides a way to examine, and undo, transactions completed during some past period of time.
SELECT * FROM flashback_transaction_query a WHERE a.table_name='T_QUERY';
Note:
UNDO_SQL contains the statement that reverses the original operation against the T_QUERY table.
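A small sketch of pulling just the compensating statements (UNDO_SQL, TABLE_NAME and OPERATION are columns of FLASHBACK_TRANSACTION_QUERY):
SELECT undo_sql
  FROM flashback_transaction_query
 WHERE table_name = 'T_QUERY'
   AND operation IN ('INSERT','UPDATE','DELETE');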

Reproducing the "performing DML/DDL operation over object in bin" error

1. Symptom in the alert log
Thu Jul 21 09:49:38 2011
performing DML/DDL operation over object in bin.
Thu Jul 21 09:51:02 2011
performing DML/DDL operation over object in bin.
2. Reproducing it
1) Confirm that the recycle bin is enabled
SQL> show parameter recyclebin;
NAME TYPE VALUE
———————————— ———– ——————————
recyclebin string ON
If it is OFF, enable the recycle bin with alter system set recyclebin=on;
2) Create and drop a table
SQL> create table t_drop
2 as
3 select * from tab;
Table created.
SQL> drop table t_drop;
Table dropped.
SQL> show recyclebin;
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
—————- —————————— ———— ——————-
T_DROP BIN$qIuF4JHvDJDgQKjADAsarA==$0 TABLE 2011-07-21:09:46:52
3) Query the data of the table in the recycle bin
SQL> select * from "BIN$qIuF4JHvDJDgQKjADAsarA==$0";
TNAME TABTYPE CLUSTERID
—————————— ——- ———-
T TABLE
TAB_CRM_CLIENT TABLE
TEST_COUNT TABLE
V_TEST VIEW
TEST TABLE
T_DROP TABLE
TAB_OLCM TABLE
7 rows selected.
4) Attempt a DML operation on the object in the recycle bin
SQL> delete from "BIN$qIuF4JHvDJDgQKjADAsarA==$0";
delete from "BIN$qIuF4JHvDJDgQKjADAsarA==$0"
*
ERROR at line 1:
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
5) Attempt a DDL operation on the object in the recycle bin
SQL> drop table "BIN$qIuF4JHvDJDgQKjADAsarA==$0";
drop table "BIN$qIuF4JHvDJDgQKjADAsarA==$0"
*
ERROR at line 1:
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
6) Check the alert log: the message is reproduced
Thu Jul 21 09:56:44 2011
performing DML/DDL operation over object in bin.
Thu Jul 21 09:57:19 2011
performing DML/DDL operation over object in bin.
7) Explanation
The alert log reports messages like this because DML or DDL was attempted against an object sitting in the recycle bin; DDL and DML cannot be run against such objects. Restore the object or purge it first, as sketched below.
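A minimal sketch of the two legitimate options, using the recycle-bin name from the example above:
SQL> flashback table t_drop to before drop;        -- restore the table, then run DML/DDL against T_DROP
SQL> purge table "BIN$qIuF4JHvDJDgQKjADAsarA==$0"; -- or discard it for good if it is no longer needed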

Testing innobackupex incremental backups

1. Full backup
/opt/mysql/product/5.5/bin/innobackupex --defaults-file=/etc/my.cnf --no-timestamp --socket=/var/run/mysqld/mysqld.sock --user=root --password=passw0rd /opt/mysql/mysql_bak/full
2. Change some data in the database
mysql> create database fei;
Query OK, 1 row affected (0.03 sec)
mysql> use fei;
Database changed
mysql> create table t_1 as select * from information_schema.tables;
Query OK, 83 rows affected (0.20 sec)
Records: 83 Duplicates: 0 Warnings: 0
mysql> create table t_2 as select * from information_schema.tables;
Query OK, 84 rows affected (0.03 sec)
Records: 84 Duplicates: 0 Warnings: 0
mysql> create table t_3 as select * from information_schema.tables;
Query OK, 85 rows affected (0.04 sec)
Records: 85 Duplicates: 0 Warnings: 0
3. Incremental backup
/opt/mysql/product/5.5/bin/innobackupex --defaults-file=/etc/my.cnf --no-timestamp --socket=/var/run/mysqld/mysqld.sock --user=root --password=passw0rd --incremental --incremental-basedir=/opt/mysql/mysql_bak/full /opt/mysql/mysql_bak/inc
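To confirm the incremental picks up where the full backup ended, the LSNs in the xtrabackup_checkpoints file that innobackupex writes into each backup directory can be compared:
cat /opt/mysql/mysql_bak/full/xtrabackup_checkpoints   # note the to_lsn of the full backup
cat /opt/mysql/mysql_bak/inc/xtrabackup_checkpoints    # its from_lsn should equal the full backup's to_lsn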
4. Prepare (apply the log to) the full backup
/opt/mysql/product/5.5/bin/innobackupex /opt/mysql/mysql_bak/full --apply-log
5. Apply the incremental backup
/opt/mysql/product/5.5/bin/innobackupex /opt/mysql/mysql_bak/full --incremental --incremental-dir=/opt/mysql/mysql_bak/inc --apply-log
6. Restore the database to its default directory
/opt/mysql/product/5.5/bin/innobackupex --defaults-file=/etc/my.cnf /opt/mysql/mysql_bak/full/ --copy-back
7. Start the database and check
Result: none of the newly created objects were recovered.
The database and tables created after the full backup do not exist, and the incremental apply step printed the following errors:
110719 11:49:57 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: error: cannot open /opt/mysql/mysql_bak/full/fei/t_2.ibd
xtrabackup: Error: xtrabackup_apply_delta() failed.
xtrabackup: page size for /opt/mysql/mysql_bak/inc/fei/t_3.ibd.delta is 16384 bytes
110719 11:49:57 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: error: cannot open /opt/mysql/mysql_bak/full/fei/t_3.ibd
xtrabackup: Error: xtrabackup_apply_delta() failed.
xtrabackup: page size for /opt/mysql/mysql_bak/inc/fei/t_1.ibd.delta is 16384 bytes
110719 11:49:57 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: error: cannot open /opt/mysql/mysql_bak/full/fei/t_1.ibd
xtrabackup: Error: xtrabackup_apply_delta() failed.
The fei.* objects were created after the full backup, so they do not exist in the full backup set, which is why applying the incremental raised these errors; if the original data files had also been lost, the fei.* data could not have been recovered because its data files were missing. In this test, innobackupex's incremental backup and restore did not handle newly created objects correctly.
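One hedged observation on the procedure itself: the Percona XtraBackup documentation prepares the base backup with --redo-only before any incrementals are applied, and only runs the final --apply-log without it; the test above prepared the full backup without --redo-only first, which may have contributed to the failure. The documented sequence looks roughly like this:
/opt/mysql/product/5.5/bin/innobackupex --apply-log --redo-only /opt/mysql/mysql_bak/full
/opt/mysql/product/5.5/bin/innobackupex --apply-log /opt/mysql/mysql_bak/full --incremental-dir=/opt/mysql/mysql_bak/inc
/opt/mysql/product/5.5/bin/innobackupex --apply-log /opt/mysql/mysql_bak/full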