
Tuesday, February 22, 2011

Distributed read consistency

According to the document Managing Read Consistency, in a distributed environment, queries can return stale data because SCNs are not synchronized. For example: a remote database updates a row in a table and commits. The local database then queries that table over a database link and gets the pre-update data; only a second query returns the updated data.

After repeated attempts I was unable to reproduce the behavior described in the document; the latest committed data was always returned. The reason is unknown.
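A minimal reproduction attempt can be sketched as follows (the database link name remote_db and table t are hypothetical; the UPDATE runs on the remote database, the queries on the local one):

```sql
-- On the remote database: change a row and commit.
UPDATE t SET val = 'new' WHERE id = 1;
COMMIT;

-- On the local database, immediately afterwards: query over the link.
-- Per the documentation, this first query may still return the old value
-- if the local SCN lags behind the remote one.
SELECT val FROM t@remote_db WHERE id = 1;

-- Querying again synchronizes the SCNs and returns the committed value.
SELECT val FROM t@remote_db WHERE id = 1;
```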


External links:
Distributed Query Returns Different Results on Consecutive Runs when no Changes Exist in the Base Tables. [ID 561471.1]
Bug 6933259: DISTRIBUTED QUERY RETURNS WRONG RESULTS
Bug 611416: SELECT AFTER UPDATE AND COMMIT DOESN'T SHOW UPDATED COLUMNS



-fin-

Monday, February 21, 2011

Using dNFS clonedb for testing


1. Configuring dNFS

1.1. Mounting the NFS share
Edit /etc/fstab, e.g.:
192.168.1.1:/dbak/xxods /home/oracle/xxodsnew    nfs     rw,bg,hard,intr,nolock,tcp,rsize=32768,wsize=32768,nocto,retry=480  0       0
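With the fstab entry in place, the share can be mounted and checked by hand (mount point as in the example above):

```shell
# Mount the export listed in /etc/fstab, then confirm the options took effect.
mount /home/oracle/xxodsnew
mount | grep xxodsnew
```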

1.2. Enabling the nfsodm library
Rename the original odm library and recreate it as a symbolic link to the nfsodm library, e.g.:
ln -s libnfsodm11.so libodm11.so
Restart the database. The alert log message "Oracle instance running with ODM" indicates that dNFS is enabled:
...
  pga_aggregate_target     = 1G
  dg_broker_start          = FALSE
  diagnostic_dest          = "/opt/oracle"
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
Fri Dec 31 18:13:43 2010
...
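The rename-and-symlink step can be sketched as follows (an 11g ORACLE_HOME with the library names above is assumed; the _stub backup name is arbitrary):

```shell
# Swap the stub ODM library for a symlink to the dNFS ODM library.
cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub   # keep the original for easy rollback
ln -s libnfsodm11.so libodm11.so
```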

1.3. Setting the oradism executable's permissions
To set the setuid bit on the oradism executable, run:
root.sh -silent
The oradism executable's permissions become:
-rwsr-x--- 1 root   oinstall     68432 Sep  4 22:21 oradism

If this is not set, the alert log may report "Direct NFS: please check that oradism is setuid", e.g.:
...
space available in the underlying filesystem or ASM diskgroup.
Fri Dec 31 18:19:19 2010
alter database backup controlfile to '/home/oracle/xxodsnew/control_bak.ctl'
Direct NFS: please check that oradism is setuid
Fri Dec 31 18:20:19 2010
Direct NFS: please check that oradism is setuid
Fri Dec 31 18:21:19 2010
Direct NFS: please check that oradism is setuid
Fri Dec 31 18:22:19 2010
Direct NFS: please check that oradism is setuid
Fri Dec 31 18:23:20 2010
Completed: alter database backup controlfile to '/home/oracle/xxodsnew/control_bak.ctl'
Fri Dec 31 18:23:51 2010
Starting background process SMCO
Fri Dec 31 18:23:51 2010
SMCO started with pid=28, OS id=18071
...

1.4. Checking the views
v$dnfs_servers: dNFS servers
v$dnfs_channels: dNFS network paths
v$dnfs_files: dNFS files
v$dnfs_stats: dNFS statistics

For example:
SQL> select * from v$dnfs_servers;

        ID
----------
SVRNAME
--------------------------------------------------------------------------------
DIRNAME
--------------------------------------------------------------------------------
   MNTPORT    NFSPORT      WTMAX      RTMAX
---------- ---------- ---------- ----------
         1
192.168.1.1
/dbak/xxods
       721       2049          0          0
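To confirm that the datafiles themselves are being served through the Direct NFS client rather than kernel NFS, v$dnfs_files can be checked as well (column names as documented for 11g; the output depends on your files):

```sql
-- Files currently open through the Direct NFS client.
SELECT filename, filesize, svr_id
FROM   v$dnfs_files;
```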


External links:
Direct NFS: FAQ [ID 954425.1]
Step by Step - Configure Direct NFS Client (DNFS) on Linux [ID 762374.1]
3.2.3 Deciding to Use Direct NFS for Data Files

Mount Options for Oracle files when used with NAS devices [ID 359515.1]




2. Building a test database on dNFS

This uses the dNFS copy-on-write feature: the test database writes only the blocks it modifies, which greatly reduces the space it occupies.

The main steps are:
2.1. Export an NFS directory from the production host and mount it on the test host.
2.2. Create a backup of the production datafiles (an RMAN image copy, an OS copy, or a storage snapshot) and store it in the NFS directory.
2.3. Back up the production control file to trace.
2.4. For the test database, edit the control file script so the datafile paths point to the NFS backup directory, and adjust the corresponding parameters in the parameter file.
2.5. Start the test database in NOMOUNT mode and create the control file.
2.6. Run the dbms_dnfs.clonedb_renamefile procedure to rename each datafile backup into the test directory. No actual rename takes place: the backup is read-only and is never modified.
2.7. Recover and open the test database.
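Steps 2.4 through 2.7 can be sketched as a single SQL*Plus session (all paths and names below are placeholders; in 11.2.0.2 the test instance's parameter file also needs clonedb=true; the full procedure is in the MOS note quoted below):

```sql
-- 2.5: start the test instance and create its control file,
--      pointing the datafile entry at the read-only backup copy.
STARTUP NOMOUNT PFILE='/u03/test1/oracle/dbs/testdb1.ora'

CREATE CONTROLFILE REUSE SET DATABASE "testdb1" RESETLOGS ARCHIVELOG
  LOGFILE GROUP 1 '/u03/test1/oracle/dbs/t_1alog.f' SIZE 100M
  DATAFILE '/u02/oracle/backup/prod1/system01.dbf';

-- 2.6: "rename" each datafile onto the NFS test directory; the backup
--      stays untouched and changed blocks go to the new sparse file.
BEGIN
  dbms_dnfs.clonedb_renamefile(
    '/u02/oracle/backup/prod1/system01.dbf',
    '/home/oracle/xxodsnew/system01.dbf');
END;
/

-- 2.7: recover (for a hot backup) and open the clone.
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```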


External links:
Oracle Database 11g Direct NFS Clonedb Feature – Part I.
Oracle Database 11g Direct NFS Clonedb Feature – Part I (and a half).
Direct NFS (DNFS) Clonedb in Oracle Database 11g Release 2 (Patchset 11.2.0.2)
Clone your dNFS Production Database for Testing [ID 1210656.1]

NOTE 1210656.1 from MOS:

Clone your dNFS Production Database for Testing [ID 1210656.1]

--------------------------------------------------------------------------------

Modified: 17-FEB-2011    Type: ANNOUNCEMENT    Status: PUBLISHED

In this Document
What is being announced?
What do you need to do?



--------------------------------------------------------------------------------



Applies to:
Oracle Server - Enterprise Edition - Version: 11.1.0.6 to 11.2.0.1 - Release: 11.1 to 11.2
Information in this document applies to any platform.
Oracle RDBMS
dNFS
What is being announced?
Cloning a production database is an often-needed procedure in customer environments for developing and testing new application patches. When a new OS release, storage software version, or application version has to be installed in a production environment, it calls for thorough testing using the production data. Today this is achieved by making a copy of the production datafiles in a test environment. In addition, copies of the production database are also made in development environments, where application developers create and test their new applications. All of these require a huge amount of storage space to be allocated and managed.

Database cloning process involves copying the entire production database files into the new environment where the development or testing takes place. In addition when customers have multiple test environments, the files need to be copied into each of these test environments. dNFS provides a new feature called clonedb which allows test databases to be cloned instantaneously. dNFS clonedb feature allows customers to set up their test database environment without really copying the production files into the new database environment. The files in the test database are created based on copy-on-write technology, so that only the blocks that are updated in the test database are ever written to the disk. This type of thin-provisioning reduces the amount of storage required for testing and development purposes. In addition to the reduced storage requirement, since the datafiles in the new test environment are not copied, setting up a test database is very quick and instantaneous.

Database cloning in a dNFS environment can be done from an RMAN full backup/image copy of all the relevant datafiles or from a storage snapshot. The database backup is used as the backing store for the datafiles in the test database environment. Customers often take, or are required to take, a full backup of their database, and these files can be used to create the dNFS clonedb. By using the backup piece as the backing store, the production datafiles are not touched by the test instances, and the test instances don't compete for the same resources the production database is using.
What do you need to do?
Following the few simple steps described below, one can clone a production database in a matter of minutes. For the example below, let us assume that your production database PROD1 is located at ORACLE_HOME=/u01/prod1/oracle. The RMAN backup of this production database is taken at RMAN_BACKUP_DIR=/u02/oracle/backup/prod1. The user is trying to clone test database1 at ORACLE_HOME=/u03/test1/oracle and test database2 at ORACLE_HOME=/u03/test2/oracle.


1. On your production database, take a full RMAN hot/cold backup. If it is a hot backup, make sure your production database is in archivelog mode and all the necessary archivelogs are saved and accessible to the new test database environment.
- To take a hot backup, first put the database in hot backup mode.
alter database begin backup;
<Do a rman image copy/OS copy/Storage snapshot for each datafile>
alter database end backup;
- You can also run the "backup as copy database format" RMAN command to get a copy of all your datafiles:
Recovery Manager: Release 11.2.0.2.0 - Production on Wed Aug 18 10:25:35 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
RMAN-06568: connected to target database: PROD1 (DBID=1985124264, not open)
RMAN-06008: connected to recovery catalog database
RMAN> backup as copy database format '/u02/oracle/backup/prod1/%U' ;
RMAN-03090: Starting backup at AUG 18 2010 10:25:37
RMAN-08030: allocated channel: ORA_DISK_1
RMAN-08500: channel ORA_DISK_1: SID=110 device type=DISK
RMAN-08580: channel ORA_DISK_1: starting datafile copy
RMAN-08522: input datafile file number=00001 name=/u01/prod1/oracle/dbs/tbs_01.f
RMAN-08586: output file name=/u02/oracle/backup/prod1/data_D-PROD1_I-1985124264_TS-SYSTEM_FNO-1_01lll1d6 tag=TAG20100818T102541 RECID=1 STAMP=727352775
RMAN-08581: channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35

The above command creates a backup of all your datafiles in the location
"/u02/oracle/backup/prod1/"
2. Generate the backup control file script from your production database by connecting with sysdba privileges and executing 'alter database backup controlfile to trace'. This generates a trace file in your user_dump_dest directory containing the create controlfile command.
3. Generate testdb_create.sql using the script in the trace file.
SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
STARTUP NOMOUNT PFILE='/u03/test1/oracle/dbs/testdb1.ora'
CREATE CONTROLFILE REUSE SET DATABASE "testdb1" RESETLOGS ARCHIVELOG
MAXLOGFILES 32 MAXLOGMEMBERS 2 MAXINSTANCES 1
MAXLOGHISTORY 908 LOGFILE
GROUP 1 '/u03/test1/oracle/dbs/t_1alog.f' SIZE 100M BLOCKSIZE 512,
GROUP 2 '/u03/test1/oracle/dbs/t_2alog.f' SIZE 100M BLOCKSIZE 512
DATAFILE
'/u02/oracle/backup/prod1/data_D-PROD1_I-1985124264_TS-SYSTEM_FNO-1_01lll1d6',...CHARACTER SET WE8DEC;

Make sure you change the database name to the test db name, that the log directories point to the new testdb environment, and that the datafile entries use the image copies.
4. Copy the spfile from your production db to the testdb environment and change the relevant parameters. In your testdb environment, run testdb_create.sql to create the testdb control file.
5. Execute dbms_dnfs.clonedb_renamefile(backup_file_name, new_data_filename) on each datafile in your test database.

SQL> declare
  2  begin
  3  dbms_dnfs.clonedb_renamefile('/u02/oracle/backup/prod1/data_D-LV1_I-1985124264_TS-SYSTEM_FNO-1_01lll1d6' , 'test1_tbs_01.f');
  4  end;
  5  /

PL/SQL procedure successfully completed.
Do this for each datafile in your testdb environment.
6. If you created your test clone from a hot backup, do a "recover database using backup controlfile until cancel".
The command will prompt for the archivelogs covering the period when the backup was taken; supply those file names.
SQL> recover database using backup controlfile until cancel ;
7. Open the database and start running your tests:

SQL> alter database open resetlogs ;

At this point your testdb is available for use and you can read from and write to the database. The testdb will not yet have allocated any storage for its datafiles. As you start running your test workload, inserts into the datafiles allocate space on demand.

The same procedure can be repeated for multiple test database instances using the same backup files. If you have a storage-level snapshot of the datafiles, the RMAN backup file names can be replaced with the storage snapshot names.

Since all the testdb clones are using the rman backup or the storage snapshot as the backend storage, availability of these files is critical for testdb to run. If the backup files become unavailable, you will see errors in the testdb database.

When it is time to destroy the testdb environment, all files in the test environment can be deleted without any impact on the production or backup environment. As this clonedb feature uses the backup piece as the backing storage, there is no pressure on the I/O subsystem that is servicing the production database. DBAs don't have to hunt for storage equivalent to the size of the production database to set up a testdb environment. The testdb storage usage grows at the speed at which the data is modified.

Work-in-progress - A perl script to automate the above steps is in progress. Using this script, one will be able to clone a database by issuing a single command after setting a few environment variables.



3. dNFS performance testing
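A simple starting point for such tests is to sample v$dnfs_stats before and after a workload (per-process Direct NFS operation counters; the column names below are as documented for 11g):

```sql
-- Per-process Direct NFS operation counts; diff two samples
-- taken around a workload to see the dNFS traffic it generated.
SELECT pnum, nfs_read, nfs_write
FROM   v$dnfs_stats;
```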


External links:
New DNFS Performance Results
Monitoring Direct NFS with Oracle 11g and Solaris… pealing back the layers of the onion.
Direct NFS vs Kernel NFS bake-off with Oracle 11g and Solaris… and the winner is
Kernel NFS fights back… Oracle throughput matches Direct NFS with latest Solaris improvements




-fin-