A company in the Streaming & Media sector needed a new NFS (Network File System) server to be used by their streaming servers to transfer large recording files for the time-shift function.
What is a Network File System (NFS)?
NFS, or Network File System, is a distributed file system protocol that allows users to view, store, update, and share files on a remote computer as though they were on a local one. Besides providing local-style access to remote files, NFS is noteworthy for its simple host-based authentication: a client can be granted access by its IP address alone. An NFS file share also provides the following advantages:
- provides central management;
- lets users log into any server and access their files;
- can be secured with firewalls and Kerberos;
- has been around for a long time, so most administrators and applications are already familiar with it;
- doesn't require a manual refresh to pick up new files.
The first thing we did was create a new virtual machine for testing purposes, to check whether the machines that will be using the NFS server can see it and mount a share from it. For this client, we use CloudStack to create and manage virtual machines. Since there was no LUN connected to the NFS server yet, we imitated a shared volume by creating an additional 5 GB volume in CloudStack and attaching it to the test machine.
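As a rough illustration of this step, the attached 5 GB test volume could be mounted on the test machine with a Puppet `mount` resource like the one below. The device name, mount point, and file system type here are illustrative assumptions, not values from the actual setup:

```puppet
# Sketch only: device, path, and fstype are assumed for illustration.
file { '/export/nfs-test':
  ensure => directory,
}

# Mount the extra 5 GB CloudStack volume to act as the test share
mount { '/export/nfs-test':
  ensure  => mounted,
  device  => '/dev/sdb',
  fstype  => 'ext4',
  options => 'defaults',
  require => File['/export/nfs-test'],
}
```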
We also use Puppet for configuration management, so we needed to create a new Puppet configuration file for the NFS server. It describes:
- the range of IPs that can connect to the NFS server and the ports to be opened (determine the range based on the machines that will use the NFS server);
- the mount point where the LUN storage will be connected;
- the file system type;
- the device to be used (for example, /dev/sda1) and its mount options.
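A minimal sketch of such a configuration is shown below. The subnet, ports, device, and mount point are all placeholder assumptions, and the firewall rule assumes the puppetlabs-firewall module is available (parameter names may differ between module versions):

```puppet
# Sketch only: subnet, device, and paths are illustrative assumptions.
# Allow NFS traffic (rpcbind on 111, nfsd on 2049) from the
# streaming servers' subnet, using the puppetlabs-firewall module.
firewall { '100 allow NFS from streaming servers':
  proto  => 'tcp',
  dport  => [111, 2049],
  source => '10.0.0.0/24',
  action => 'accept',
}

# Mount point where the LUN storage will be connected
mount { '/export/recordings':
  ensure  => mounted,
  device  => '/dev/sda1',
  fstype  => 'xfs',
  options => 'defaults,noatime',
}
```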
Another important thing managed by Puppet is the content of the /etc/exports file. This file defines which clients can use the NFS server, what rights they have, and which share they will be using.
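For reference, an /etc/exports entry has the form below; the path and subnet here are illustrative assumptions:

```
# /etc/exports: which clients may mount which share, with what rights
/export/recordings  10.0.0.0/24(rw,sync,no_subtree_check)
```

Each line pairs an exported directory with the clients allowed to mount it and their access options (read/write, sync behavior, and so on).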
We then tested whether the NFS clients could see and mount the share on the test machine.
The test was successful, so we were ready to add the configuration for the real NFS server. There are a few important things to configure here. Again, since we are using Puppet for configuration management, we needed to add those configurations in the proper place in our Puppet repo:
- The protocol that the clients will use to send commands to the storage device. In our case this is iSCSI, an IP-based storage networking standard for linking data storage facilities.
- Username and password for the iSCSI connection. We needed to communicate with the NFS provider in order to agree on those two things.
- The authentication method that will be used for the iSCSI session. In our case it is CHAP.
- The iSCSI initiator name, which is generated after manually installing the iSCSI package on one of the machines; the installation automatically creates a file containing the initiator name in the iSCSI config directory.
- To auto-mount the new NFS share on its clients, we needed to create a new Puppet module designed specifically for this NFS share. As a starting point, we used an existing module and modified some of its classes and manifests.
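The iSCSI settings above end up in the standard open-iscsi config files. The fragment below is a sketch of what they look like; the credentials and the IQN are placeholders, not real values:

```
# /etc/iscsi/iscsid.conf: CHAP authentication for the iSCSI session
# (username and password are placeholders agreed with the provider)
node.session.auth.authmethod = CHAP
node.session.auth.username = exampleuser
node.session.auth.password = examplesecret

# /etc/iscsi/initiatorname.iscsi: generated when the iSCSI package
# is installed (the IQN below is an illustrative placeholder)
InitiatorName=iqn.1994-05.com.example:streaming-nfs-01
```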
After we applied the above configurations in Puppet and distributed them to the NFS clients, they successfully mounted the new NFS share. This concludes our experience with adding a new NFS mount.
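As a final illustration, the core of an auto-mount module like the one described above can be reduced to a single Puppet `mount` resource on each client; the server name and paths below are illustrative assumptions:

```puppet
# Sketch only: server address and paths are assumed for illustration.
# Auto-mount the NFS share on each client, including at boot.
mount { '/mnt/recordings':
  ensure  => mounted,
  device  => 'nfs-server.example.com:/export/recordings',
  fstype  => 'nfs',
  options => 'defaults,_netdev',
  atboot  => true,
}
```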
If you have questions about any step of our configuration, you may contact us and we'll get back to you with the answers you need.