I haven't been a big NFS fan ever since I worked as a Linux/Unix administrator back in the good old days. Sometimes, when the NFS server hung, lost network connectivity, or something else happened, all clients that had mounts from it blocked completely, waiting for the server to come back, because NFS sits so deep in the kernel. All commands, even "ls", froze, and the only cure was forcibly rebooting the clients to get them back online. Neat, eh?

When NFS v4.1 emerged back in 2010, hopes were high that it would fix everything. I was a bit sceptical but decided to give it a shot, and indeed many fixes to the protocol and its implementation improved stability. Some of them were: blocking locks that allow the client to poll the server to check whether a lock has been released instead of only waiting for notifications, timeouts when the server is unavailable, and parallel access (pNFS). From what I saw, I couldn't really break it beyond repair; there's a short client-side mount sketch at the end of this section showing the knobs involved.

As time went by, OpenStack offered the option to use NFS as a storage backend. We decided to use it for one deployment where this technology seemed appropriate: we didn't need highly available storage with replication occupying twice the space, but we did need Cinder volumes that could be mounted across the hypervisors. I had a feeling that something could go wrong during the installation, because I remembered all those nights rebooting servers over iLO / IPMI.
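To make the v4.1-era timeout behaviour mentioned above concrete, here is a minimal client-side mount sketch. The server name and export path are placeholders, and the timeout values are illustrative, not recommendations; note that "soft" trades the old everything-freezes behaviour for I/O errors once the retries are exhausted, which can bite applications that don't handle them.

```sh
# Mount an NFS v4.1 export; "nfs-server" and /export/data are placeholders.
#   vers=4.1    - request protocol version 4.1
#   soft        - fail I/O with an error after retries instead of hanging forever
#   timeo=100   - retransmit after 10 seconds (timeo is in deciseconds)
#   retrans=3   - give up after 3 retransmissions
mount -t nfs -o vers=4.1,soft,timeo=100,retrans=3 nfs-server:/export/data /mnt/data
```

The default is "hard", which retries indefinitely; that is exactly the mode that used to freeze my clients when the server went away.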
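For the deployment itself, the Cinder side boils down to pointing the NFS volume driver at a list of shares. A minimal sketch, assuming a single backend section named `nfs-1` and a shares file at `/etc/cinder/nfs_shares` (both names are mine for illustration, not from our actual deployment); the option names come from Cinder's in-tree NFS driver:

```ini
# /etc/cinder/cinder.conf (excerpt)
[DEFAULT]
enabled_backends = nfs-1

[nfs-1]
volume_backend_name = nfs-1
# The in-tree NFS driver; volumes are created as files on the mounted share.
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# File listing the NFS shares to use, one "host:/export" per line.
nfs_shares_config = /etc/cinder/nfs_shares
# Mount options passed through to the kernel NFS client.
nfs_mount_options = vers=4.1
```

```sh
# /etc/cinder/nfs_shares (hypothetical contents)
nfs-server:/export/cinder-volumes
```

Because volumes are plain files on the share, every hypervisor that mounts it can see them, which is exactly the cross-hypervisor mounting we needed.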