Setting up Direct NFS on Oracle 12c
Posted by Martin Bach on July 9, 2014
Direct NFS is a great feature that I have finally had the time to investigate further. Since I always forget how to set it up, and I didn’t find much written about the subject elsewhere, I decided to put something together.
In this configuration I am using a virtual machine named server1 to export a directory to server2. Oracle is not as lenient as I am and has specific support requirements for dNFS servers, but I just wanted to get started.
The export of the NFS mount is shown here:
[root@server1 ~]# cat /etc/exports
/u01/oraback *.example.com(rw,sync,all_squash,insecure,anonuid=54321,anongid=54321)
There is nothing too special about the export definition here. The all_squash directive maps all uids and gids to the anonymous user, which by default has a uid and gid of 65534. Since that is most likely not going to match the oracle user, I chose to override the behaviour with anonuid and anongid. You may have already guessed that I am using the Oracle preinstall RPM, which uses 54321 for the oracle account and the oinstall group respectively.
Oracle also recommends tweaking some network related parameters in /etc/sysctl.conf. They appear to be aimed at better performance, but I haven’t had time to verify that claim yet. On the other hand, from kernel 2.6 onwards Linux tunes the TCP send and receive buffers automatically, and it’s probably a good idea not to try to outsmart it.
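For reference, the kind of sysctl.conf entries usually quoted in this context look like the fragment below. Treat the values as purely illustrative and check the current Oracle/MOS recommendations for your kernel and NFS server before applying anything:

# illustrative values only - verify against current Oracle documentation
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
sunrpc.tcp_slot_table_entries = 128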
Using “service nfs start” I started the NFS server processes on my Oracle Linux 6.4 system. I had the appropriate firewall rules in place already; if you run a firewall you may need to do the same.
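Before touching fstab it can be worth verifying that the export is actually visible from the client. showmount queries the server’s mount daemon; a quick check using the hostnames from this post (your output will obviously differ):

[root@server2 ~]# showmount -e server1
Export list for server1:
/u01/oraback *.example.com

If this hangs or errors out, suspect the firewall or the NFS services on the server before debugging anything Oracle-side.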
Mounting the file system on the second node requires a setting in the fstab file to begin with. In this example I want to mount the exported backups in /media/backup on server2.
The corresponding entry in fstab was:
...
server1:/u01/oraback /media/backup nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 1 2
I kept the habit of enforcing NFSv3 even though versions 4 and 4.1 are now supported with 12c. With the fstab entry in place I can now mount the directory:
[root@server2 ~]# mount /media/backup
[root@server2 ~]# mount | grep backup
server1:/u01/oraback on /media/backup type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600,addr=192.168.56.44)
So far kernel NFS has been used, but not Direct NFS. To change that, you need to create a mapping file named oranfstab, which resides in $ORACLE_HOME/dbs. In this file you define how to reach the NFS filer; you can define up to four paths if you have multiple NICs in your database server. My VM does not, so the most basic setup is used:
[root@server2 ~]# cat /u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/oranfstab
server: server1
local: 192.168.56.45
path: 192.168.56.44
export: /u01/oraback mount: /media/backup
export: /m/CLONE mount: /u01/oradata/CLONE
[root@server2 ~]#
The “server” directive gives the NFS filer a name; each additional filer gets its own section, introduced by another “server” keyword.
Following the server keyword you define how you get the data: the “local” IP address shown here is the IP of server2. The “path” keyword indicates the path to the NFS filer, server1 or 192.168.56.44. And finally you tell Oracle the name of the export on the filer (/u01/oraback) and where it is mounted locally (/media/backup).
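If the database server did have multiple NICs, the same server section could list additional local/path pairs, and dNFS would load-balance and fail over across them. A hypothetical sketch, where the 192.168.57.x addresses for a second subnet are made up for illustration:

server: server1
local: 192.168.56.45
path: 192.168.56.44
local: 192.168.57.45
path: 192.168.57.44
export: /u01/oraback mount: /media/backup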
The last step necessary is to enable dNFS:
[oracle@server2 lib]$ make -f ins_rdbms.mk dnfs_on
rm -f /u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libodm12.so; cp /u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libnfsodm12.so /u01/app/oracle/product/12.1.0.1/dbhome_1/lib/libodm12.so
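As the output shows, enabling dNFS simply copies the NFS ODM library over the stub libodm12.so. Should you ever need to revert to kernel NFS, the same makefile provides a corresponding target that puts the stub library back:

[oracle@server2 lib]$ make -f ins_rdbms.mk dnfs_off

Either way, the change only takes effect for instances started after the switch.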
All done! After the next start of your database you should see a reference to the ODM library in the alert.log, similar to this one:
...
NOTE: remote asm mode is local (mode 0x1; from cluster type)
Sun Nov 17 14:30:41 2013
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
Starting background process PMON
Starting background process PSP0
...
Hopefully you will now be able to query the metadata views too.
SQL> select * from v$dnfs_servers;

        ID SVRNAME    DIRNAME                 MNTPORT    NFSPORT NFSVERSI      WTMAX      RTMAX     CON_ID
---------- ---------- -------------------- ---------- ---------- -------- ---------- ---------- ----------
         1 server1    /m/clone                  52690       2049 NFSv3.0     1048576    1048576          0
That’s all there is to say about dNFS for this post. Oh, and if the query against v$dnfs_servers does not return anything at first, it doesn’t necessarily indicate a problem: dNFS only opens its channels when there is I/O to do. I simply created a file on the mount point and, as if by magic, dNFS kicked in, opened the channel and opened the file descriptors.
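Besides v$dnfs_servers there are a few related views worth a look once traffic is flowing; column lists abbreviated here, so check the reference documentation for the full set:

SQL> select filename from v$dnfs_files;
SQL> select svrname, path from v$dnfs_channels;
SQL> select * from v$dnfs_stats;

v$dnfs_files shows which database files are currently served via dNFS, v$dnfs_channels shows the open network paths, and v$dnfs_stats gives per-process operation counters.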
MOS Doc ID 1464567.1: Collecting The Required Information For Support To Troubleshoot DNFS (Direct NFS) Issues (11.1, 11.2 & 12c)