
Oracle 11g Direct NFS Testing


    Over the past few days I tested Oracle 11g's Direct NFS feature and found that Oracle Direct NFS raises I/O concurrency by opening multiple TCP connections to the NFS server. As discussed in an earlier post, one reason ordinary NFS I/O performance is poor is that the NFS client's requests to the server are serialized: a normal NFS client establishes only a single connection to the server, and the next request cannot be processed until the previous one completes, so random-read IOPS cannot scale. Oracle Direct NFS instead establishes multiple TCP connections to the NFS server, allowing requests to be processed concurrently, which in theory can greatly improve NFS performance.

    In practice, Direct NFS reads were fast: I measured up to 400 MB/s, with no obvious bottleneck. Writes, however, were slow: while inserting data, write traffic was only about 3.4 MB/s. Why writes are this slow is unclear; my guess is that the Linux NFS server does not interact well with Oracle Direct NFS.

    When running an RMAN backup, if the backup path lies under a Direct NFS-configured path, the backup automatically goes through Direct NFS as well.

    Test procedure:

    First switch the ODM library to the one that supports Direct NFS:

    [oracle@nfs_client lib]$ ls -l *odm*
-rw-r--r-- 1 oracle oinstall 54764 Sep 11  2008 libnfsodm11.so
lrwxrwxrwx 1 oracle oinstall    12 Jul  8 18:55 libodm11.so -> libodmd11.so
-rw-r--r-- 1 oracle oinstall 12755 Sep 11  2008 libodmd11.so
[oracle@nfs_client lib]$ rm libodm11.so
[oracle@nfs_client lib]$ ln -s libnfsodm11.so libodm11.so
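The swap above deletes the original symlink outright. A slightly safer variant keeps the old link around so the change is easy to undo; the sketch below rehearses it in a scratch directory standing in for $ORACLE_HOME/lib (the file names match the listing above, the scratch setup is only for illustration):

```shell
# Rehearse the ODM library swap in a scratch directory that mimics
# $ORACLE_HOME/lib; the same two commands apply to the real directory.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/libnfsodm11.so" "$LIBDIR/libodmd11.so"
ln -s libodmd11.so "$LIBDIR/libodm11.so"    # default: disk ODM stub

cd "$LIBDIR"
mv libodm11.so libodm11.so.orig             # keep the original link for rollback
ln -s libnfsodm11.so libodm11.so            # point ODM at the Direct NFS library
readlink libodm11.so                        # prints: libnfsodm11.so
```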

    Share a directory on the NFS server machine. To keep the disks from becoming the I/O bottleneck, build a RAID 0 array from 8 disks, create an ext3 filesystem on it, and export it from the NFS server:

    mdadm -C /dev/md0 --level=raid0 --chunk=8 --raid-devices=8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mkfs -t ext3 /dev/md0
mount /dev/md0 /nfs

    Then configure the export in /etc/exports and restart the NFS service:

    /nfs 192.168.172.132(rw,no_root_squash,insecure)
service nfs restart

    On the database host (the NFS client):

    [oracle@nfs_client dbs]$ cat oranfstab
server: node_data1
path: 192.168.172.128
export: /nfs mount: /opt/oracle/oradata/nfs

    mount -t nfs 192.168.172.128:/nfs /opt/oracle/oradata/nfs
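Direct NFS opens its own TCP connections and bypasses the kernel mount options, but Oracle still requires a regular kernel mount of the export to exist. If the mount should survive reboots, an /etc/fstab entry along these lines can be used (the options shown are the ones commonly recommended for Oracle datafiles on NFS; treat them as a starting point, not a definitive setting):

```
# Hypothetical /etc/fstab entry for the kernel-level fallback mount
192.168.172.128:/nfs  /opt/oracle/oradata/nfs  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```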

    The two machines are connected through 10 GbE NICs; a network test showed the link can sustain more than 800 MB/s.

    Create a database:

    CREATE DATABASE oratest
   USER SYS IDENTIFIED BY sys
   USER SYSTEM IDENTIFIED BY system
   CONTROLFILE REUSE
   LOGFILE GROUP 1 ('/opt/oracle/oradata/oratest/redo_1_1.log') SIZE 200M REUSE,
       GROUP 2 ('/opt/oracle/oradata/oratest/redo_2_1.log') SIZE 200M REUSE,
       GROUP 3 ('/opt/oracle/oradata/oratest/redo_3_1.log') SIZE 200M REUSE,
       GROUP 4 ('/opt/oracle/oradata/oratest/redo_4_1.log') SIZE 200M REUSE,
       GROUP 5 ('/opt/oracle/oradata/oratest/redo_5_1.log') SIZE 200M REUSE
   MAXLOGFILES 20
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 1000
   MAXDATAFILES 1000
   MAXINSTANCES 2
   NOARCHIVELOG
   CHARACTER SET US7ASCII
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/opt/oracle/oradata/oratest/system01.dbf' SIZE 2046M REUSE
   SYSAUX DATAFILE '/opt/oracle/oradata/oratest/sysaux01.dbf' SIZE 2046M REUSE
   EXTENT MANAGEMENT LOCAL
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/opt/oracle/oradata/oratest/temp01.dbf' SIZE 2046M REUSE
   UNDO TABLESPACE undotbs1
      DATAFILE '/opt/oracle/oradata/oratest/undotbs01.dbf' SIZE 2046M REUSE
   SET TIME_ZONE = '+08:00';
 
Then create a tablespace tbs_test on the NFS mount:

    create tablespace tbs_test datafile '/opt/oracle/oradata/nfs/test01.dbf' size 2047M;
 
 
SQL> col svrname format a40
SQL> col dirname format a40
SQL> set linesize 200
SQL> select * from v$dnfs_servers;

        ID SVRNAME         DIRNAME        MNTPORT    NFSPORT      WTMAX      RTMAX
---------- --------------- ------------- ---------- ---------- ---------- ----------
         1 nfs_server      /nfs                 907       2049      32768      32768

    1 row selected.

     

    SQL> col filename format a40
SQL> select * from v$dnfs_files;

    FILENAME                                   FILESIZE       PNUM     SVR_ID
---------------------------------------- ---------- ---------- ----------
/opt/oracle/oradata/nfs/test01.dbf       2145394688          9          1

    SQL> col path format a30
SQL> select * from  V$DNFS_CHANNELS;

      PNUM SVRNAME    PATH   CH_ID   SVR_ID   SENDS   RECVS   PINGS
---------- --------------- -------------------- ---------- ---------- ---------- ---------- ----------
       5 nfs_server        192.168.172.128      0          1          9         25          0
       9 nfs_server        192.168.172.128      0          1         28         75          0
      11 nfs_server       192.168.172.128      0          1         96        250          0
      12 nfs_server       192.168.172.128      0          1        166        552          0
      13 nfs_server       192.168.172.128      0          1        216        955          0
      14 nfs_server       192.168.172.128      0          1          3          7          0
      15 nfs_server       192.168.172.128      0          1        351       1057          0
      17 nfs_server       192.168.172.128      0          1        899       2708          0
      18 nfs_server       192.168.172.128      0          1          3          7          0
      19 nfs_server       192.168.172.128      0          1          2          4          0
      20 nfs_server       192.168.172.128      0          1         10         30          0
      21 nfs_server       192.168.172.128      0          1         37        109          0
      22 nfs_server       192.168.172.128      0          1         18         52          0

    13 rows selected.

    Check the connections to port 2049 on the NFS server:

    [root@nfs_server data]# netstat -an |grep 2049
tcp        0      0 0.0.0.0:2049                0.0.0.0:*                   LISTEN
tcp        0      0 192.168.172.128:2049    192.168.172.132:14111       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:51478       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:61228       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:52532       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:10827       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:31047       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:55132       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:866         ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:32634       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:54646       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:47987       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:22448       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:49091       ESTABLISHED

    Execute:

    insert into test select * from test;

Watching NIC traffic with iftop (a NIC-traffic monitoring script I wrote myself) while the insert runs shows write traffic of only about 3.4 MB/s:

     ifname   in_kbytes/s out_kbytes/s all_kbytes/s in_packets/s out_packets/s all_packets/s
--------- ------------ ------------ ------------ ------------- ------------- -------------
     eth2        3133           99         3232         2370           770          3140
     eth2        3364          147         3511         2559           837          3396
     eth2        3630         1511         5142         2828          1845          4673
     eth2        3315          103         3419         2517           785          3302
     eth2        3380          105         3486         2535           796          3331
     eth2        3627          113         3741         2718           854          3572
     eth2        3610          112         3722         2704           853          3557
     eth2        3586          113         3700         2713           862          3575
     eth2        3471          107         3579         2589           804          3393
     eth2        3470          108         3578         2618           822          3440
     eth2        3347          105         3453         2525           807          3332
     eth2        3406          106         3512         2549           809          3358
     eth2        3351          106         3458         2547           814          3361
     eth2        3248          101         3349         2427           769          3196
     eth2        2743           87         2831         2080           666          2746
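A rough sanity check on that number (a back-of-the-envelope sketch, assuming Oracle's default 8 KB block size, which the source does not state): 3.4 MB/s corresponds to only a few hundred block writes per second, consistent with each block write being a small synchronous NFS request rather than a large streamed one:

```shell
# Back-of-the-envelope: how many 8 KB block writes per second fit in 3.4 MB/s?
awk 'BEGIN { mb_per_s = 3.4; blk_kb = 8; printf "%d blocks/s\n", mb_per_s * 1024 / blk_kb }'
# prints: 435 blocks/s
```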

    When executing select count(*) from test; the network traffic is much higher, peaking at about 400 MB/s.

    Counting the connections to port 2049 on the NFS server shows many of them. This differs from the operating system's NFS client, which keeps only a single connection to the server. It confirms that Oracle Direct NFS achieves highly concurrent I/O, and thus better NFS performance, by opening multiple TCP connections to the server. The number of connections scales with load: the heavier the load, the more connections.

    [root@nfs_server nfs]# netstat -an |grep 2049
tcp        0      0 0.0.0.0:2049                0.0.0.0:*                   LISTEN
tcp   166768      0 192.168.172.128:2049    192.168.172.132:20048       ESTABLISHED
tcp   173716    140 192.168.172.128:2049    192.168.172.132:22625       ESTABLISHED
tcp   172772      0 192.168.172.128:2049    192.168.172.132:28796       ESTABLISHED
tcp   170832      0 192.168.172.128:2049    192.168.172.132:4468        ESTABLISHED
tcp   171764    140 192.168.172.128:2049    192.168.172.132:42147       ESTABLISHED
tcp   172684      0 192.168.172.128:2049    192.168.172.132:63693       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:48835       ESTABLISHED
tcp   170500      0 192.168.172.128:2049    192.168.172.132:57326       ESTABLISHED
tcp   171772      0 192.168.172.128:2049    192.168.172.132:43246       ESTABLISHED
tcp        0      0 192.168.172.128:2049    192.168.172.132:36080       ESTABLISHED
udp        0      0 0.0.0.0:2049                0.0.0.0:*
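The check above can be condensed into a small helper (a sketch; the awk field positions assume the Linux `netstat -an` output format shown above):

```shell
# Count ESTABLISHED connections whose local address ends in :2049 (NFS).
# On the kernel NFS client this stays at 1; under Direct NFS it grows with load.
count_nfs_conns() {
    awk '$4 ~ /:2049$/ && $6 == "ESTABLISHED" { n++ } END { print n+0 }'
}
# usage: netstat -an | count_nfs_conns
```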
