I recently came across an interesting case from a colleague; let me reproduce it here:
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
First, create tablespace AGAIN backed by two datafiles (again01, again02), then create tables C1 and C2 in it:
SQL> create tablespace again datafile '/u01/again01.dtf' size 1M;
Tablespace created.
SQL> drop table c1;
drop table c1
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> create table c1 (id int);
Table created.
SQL> drop table c1;
Table dropped.
SQL> create table c1 (id int) tablespace again;
Table created.
SQL> begin
2 for i in 1..5 loop
3 insert into c1 values (i);
4 commit;
5 end loop ;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> alter tablespace again add datafile '/u01/again02.dtf' size 1M;
Tablespace altered.
SQL> create table c2 (id int) tablespace again;
Table created.
SQL> begin
2 for i in 1..100000 loop
3 insert into c2 values (i);
4 commit;
5 end loop;
6 end;
7 /
begin
*
ERROR at line 1:
ORA-01653: unable to extend table SYS.C2 by 128 in tablespace AGAIN
ORA-06512: at line 3
SQL> select count (*) from c2;
  COUNT(*)
----------
     77880
The / below re-executes an earlier query (not shown) that lists FILE_ID, SEGMENT_NAME, BLOCK_ID and BLOCKS from dba_extents:
SQL> col segment_name for a30
SQL> /
FILE_ID SEGMENT_NAME BLOCK_ID BLOCKS
---------- ------------------------------ ---------- ----------
8 C1 9 8
8 C2 17 8
8 C2 25 8
8 C2 33 8
8 C2 41 8
8 C2 49 8
8 C2 57 8
8 C2 65 8
8 C2 73 8
8 C2 81 8
8 C2 89 8
FILE_ID SEGMENT_NAME BLOCK_ID BLOCKS
---------- ------------------------------ ---------- ----------
8 C2 97 8
8 C2 105 8
8 C2 113 8
8 C2 121 8
9 C2 9 8
9 C2 17 8
17 rows selected.
SQL> select data_object_id from dba_objects where object_name='C2';
DATA_OBJECT_ID
--------------
         52039
SQL> select dbms_rowid.rowid_create(1,52039,8,129,1) from dual;
DBMS_ROWID.ROWID_C
------------------
AAAMtHAAIAAAACBAAB
We use dbms_rowid.rowid_create to build a ROWID for C2 just past the last block of datafile 8: 129 = max(block_id) + blocks.
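The value returned by dbms_rowid.rowid_create can be reproduced outside the database. The sketch below assumes the standard extended-ROWID text layout (OOOOOOFFFBBBBBBRRR: 6 base64 characters for the data object id, 3 for the relative file number, 6 for the block number, 3 for the row number, using the alphabet A-Z, a-z, 0-9, +, /); it is an illustration, not Oracle's implementation:

```python
# Oracle's base64 alphabet for ROWID text, in digit order.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def encode(value, width):
    """Encode an integer as `width` base64 characters, most significant first."""
    chars = []
    for _ in range(width):
        chars.append(ALPHABET[value % 64])
        value //= 64
    return "".join(reversed(chars))

def rowid_create(data_object_id, file_no, block_no, row_no):
    """Mimic dbms_rowid.rowid_create(1, ...) for an extended ROWID."""
    return (encode(data_object_id, 6) + encode(file_no, 3)
            + encode(block_no, 6) + encode(row_no, 3))

# Same arguments as in the transcript: object 52039, file 8, block 129, row 1.
print(rowid_create(52039, 8, 129, 1))  # AAAMtHAAIAAAACBAAB
```

This reproduces the AAAMtHAAIAAAACBAAB value shown above, which makes it easy to sanity-check a bound before running the recovery query.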
SQL> alter database datafile 9 offline;
Taking the datafile offline simulates a lost datafile:
SQL> select count (*) from c2;
select count (*) from c2
*
ERROR at line 1:
ORA-00376: file 9 cannot be read at this time
ORA-01110: data file 9: '/u01/again02.dtf'
SQL> select /*+rowid (c2) */ count (*) from c2 where rowid<'AAAMtHAAIAAAACBAAB';

  COUNT(*)
----------
     67980

By bounding the predicate with the ROWID just past datafile 8's last block, we force the SQL to scan only the blocks that are still available.
---------------------------------------------------------------------------------------------------------------------------
A different behavior turned up during testing:

SQL> select owner from dba_tables where table_name='MYTABLE';
OWNER
------------------------------
SYS
The / below re-runs the dba_extents extent-map query, this time for MYTABLE:
SQL> set line 200
SQL> /
FILE_ID BLOCK_ID BLOCKS SEGMENT_NAME
---------- ---------- ---------- ---------------------------------------------------------------------------------
5 9 8 MYTABLE
5 137 128 MYTABLE
5 265 128 MYTABLE
5 393 128 MYTABLE
5 521 128 MYTABLE
5 649 128 MYTABLE
6 9 8 MYTABLE
6 17 8 MYTABLE
6 25 8 MYTABLE
6 33 8 MYTABLE
6 41 8 MYTABLE
FILE_ID BLOCK_ID BLOCKS SEGMENT_NAME
---------- ---------- ---------- ---------------------------------------------------------------------------------
6 49 8 MYTABLE
6 57 8 MYTABLE
6 65 8 MYTABLE
6 73 8 MYTABLE
6 81 8 MYTABLE
6 89 8 MYTABLE
6 97 8 MYTABLE
6 105 8 MYTABLE
6 113 8 MYTABLE
6 121 8 MYTABLE
6 137 128 MYTABLE
FILE_ID BLOCK_ID BLOCKS SEGMENT_NAME
---------- ---------- ---------- ---------------------------------------------------------------------------------
6 265 128 MYTABLE
6 393 128 MYTABLE
6 521 128 MYTABLE
6 649 128 MYTABLE
6 777 128 MYTABLE
6 905 128 MYTABLE
7 9 128 MYTABLE
7 137 128 MYTABLE
7 265 128 MYTABLE
7 393 128 MYTABLE
7 521 128 MYTABLE
33 rows selected.
SQL> select data_object_id from dba_objects where object_name='MYTABLE';
DATA_OBJECT_ID
--------------
         19617
SQL> select max(block_id + blocks) from dba_extents where segment_name='MYTABLE' and file_id=5;
MAX(BLOCK_ID+BLOCKS)
--------------------
                 777
SQL> select dbms_rowid.rowid_create(1,19617,5,777,1) from dual;
DBMS_ROWID.ROWID_C
------------------
AAAEyhAAFAAAAMJAAB
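To double-check a constructed value, the text encoding can also be run in reverse. This is a minimal sketch assuming the standard extended-ROWID layout (6 base64 characters for the data object id, 3 for the relative file number, 6 for the block number, 3 for the row number):

```python
# Oracle's base64 alphabet for ROWID text, in digit order.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode(chars):
    """Interpret a run of base64 characters as one integer."""
    value = 0
    for c in chars:
        value = value * 64 + ALPHABET.index(c)
    return value

def rowid_parse(rowid):
    """Split OOOOOOFFFBBBBBBRRR into (data_object_id, file#, block#, row#)."""
    return (decode(rowid[0:6]), decode(rowid[6:9]),
            decode(rowid[9:15]), decode(rowid[15:18]))

# The bound built above: object 19617, file 5, block 777, row 1.
print(rowid_parse("AAAEyhAAFAAAAMJAAB"))  # (19617, 5, 777, 1)
```

Decoding confirms the bound points at block 777 of file 5, i.e. just past MYTABLE's last extent in the surviving datafile.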
SQL> alter database datafile 6 offline;
Database altered.
SQL> select /*+rowid (mytable) */ count (*) from mytable where rowid<'AAAEyhAAFAAAAMJAAB';
select /*+rowid (mytable) */ count (*) from mytable where rowid<'AAAEyhAAFAAAAMJAAB'
*
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '/u01/oradata/MYDB/datafile/o1_mf_mytbs_7915sqh7_.dbf'
SQL> set autot trace exp;
SQL> select /*+rowid (mytable) */ count (*) from mytable where rowid<'AAAEyhAAFAAAAMJAAB';
Execution Plan
----------------------------------------------------------
Plan hash value: 912055137
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 615 (1)| 00:00:08 |
| 1 | SORT AGGREGATE | | 1 | 12 | | |
|* 2 | TABLE ACCESS BY ROWID RANGE| MYTABLE | 4 | 48 | 615 (1)| 00:00:08 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(ROWID<'AAAEyhAAFAAAAMJAAB')
SQL> recover datafile 6;
Media recovery complete.
SQL> alter database datafile 6 online;
Database altered.
SQL> select /*+rowid (mytable) */ count (*) from mytable where rowid<'AAAEyhAAFAAAAMJAAB';

Execution Plan
----------------------------------------------------------
Plan hash value: 912055137

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     1 |    12 |   616   (1)| 00:00:08 |
|   1 |  SORT AGGREGATE             |         |     1 |    12 |            |          |
|*  2 |  TABLE ACCESS BY ROWID RANGE| MYTABLE | 72822 |   853K|   616   (1)| 00:00:08 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(ROWID<'AAAEyhAAFAAAAMJAAB')

Note
-----
   - dynamic sampling used for this statement

SQL> alter database datafile 6 offline;
Database altered.
SQL> select /*+rowid (mytable) */ count (*) from mytable where rowid<'AAAEyhAAFAAAAMJAAB';
Execution Plan
----------------------------------------------------------
Plan hash value: 912055137
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 615 (1)| 00:00:08 |
| 1 | SORT AGGREGATE | | 1 | 12 | | |
|* 2 | TABLE ACCESS BY ROWID RANGE| MYTABLE | 4 | 48 | 615 (1)| 00:00:08 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(ROWID<'AAAEyhAAFAAAAMJAAB')
SQL> set autot off
SQL> /
  COUNT(*)
----------
     49509
Right after the datafile was taken offline, the ROWID-bounded scan still tried to read the offline datafile. Only after bringing it back online, running the SQL once, and taking it offline again did the scan return rows.
I first suspected dynamic sampling, but setting optimizer_dynamic_sampling=0 produced the same behavior; I will follow up on this.
For this scenario, if the lost datafile holds several tables, we can use the SQL*Plus COPY command to move the tables to a test database and then bring them back into production with impdp.
A reference script:
select 'copy from system/oracle@css to system/oracle@hong create ' ||
       object_name || ' using select /*+rowid(' || object_name ||
       ') */ * from ' || OWNER || '.' || object_name ||
       ' where rowid < ''' ||
       dbms_rowid.rowid_create(1, data_object_id, 10, 1, 1) || ''';'
  from dba_objects
 where object_name in (select distinct segment_name
                         from dba_extents
                        where file_id = 9
                          and owner = 'SYS');
which produces commands like:
copy from system/oracle@css to system/oracle@hong create C2 using select /*+rowid(C2) */ * from SYS.C2 where
rowid < ...
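The same generation step can be sketched outside SQL*Plus. In the Python sketch below, the table list and the (file#, block#) used for the upper bound are hypothetical stand-ins for the dba_objects/dba_extents queries; here the bound is placed at the first block of the lost file, which excludes the same rows as the max(block_id) + blocks bound when the surviving file is full:

```python
# Oracle's base64 alphabet for ROWID text, in digit order.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64(value, width):
    """Encode an integer as `width` base64 characters, most significant first."""
    out = ""
    for _ in range(width):
        out = ALPHABET[value % 64] + out
        value //= 64
    return out

def copy_command(owner, table, data_object_id, bound_file, bound_block):
    """Build one SQL*Plus COPY command bounded by a constructed ROWID."""
    upper = (b64(data_object_id, 6) + b64(bound_file, 3)
             + b64(bound_block, 6) + b64(1, 3))
    return ("copy from system/oracle@css to system/oracle@hong create {t} "
            "using select /*+rowid({t}) */ * from {o}.{t} "
            "where rowid < '{r}'").format(t=table, o=owner, r=upper)

# C2 from the first experiment: bound at file 9 block 1, so only file-8 blocks are scanned.
print(copy_command("SYS", "C2", 52039, 9, 1))
```

The connect strings mirror the ones in the script above; swap in real credentials and the real segment list before using anything like this.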