Configuring Multiple Block Storage Backends in OpenStack Cinder

Mihail Vukadinoff
CTO & DevOps Lead
10.06.2017
Reading time: 3 mins.
Last Updated: 12.02.2024

If you administer virtualized environments, you have almost certainly run into I/O performance issues; I/O is usually the first bottleneck you hit. Luckily, persistent storage has evolved over the years, and high-performance SSDs are now available at a reasonable price, though still too expensive for most organizations to migrate to SSD entirely.

Hybrid environments are becoming more and more popular because they combine the low cost of traditional HDDs with the performance of solid-state drives. One of the key features of Cinder, the block storage component of OpenStack, is that it allows more than one storage backend on the same storage node. This lets us separate I/O-heavy applications from compute-oriented ones that lean more on CPU. A typical example is to run a database VM from an SSD-backed volume while keeping the application server cluster on ordinary HDD storage, which is mostly read from during startup.

How to achieve that with OpenStack and Cinder

In our environment, we use the NFS volume driver, which provides easy-to-configure distributed volumes that can be mounted across all compute nodes. Whether this is the most appropriate and stable choice is a separate debate, given the wide variety of available storage backends – Ceph, GlusterFS, iSCSI-based ones and more. The SSD/HDD differentiation described here can be applied with any of them.

Our first task is to create separate mount points on the storage server: one backed by the SSD storage array (or disk) and one by the plain old HDDs, each residing on an LVM logical volume or, not recommended, directly on a physical partition. We then need to define those mount points as separate exports in /etc/exports.

/exporthdd 192.168.0.0/24(rw,no_subtree_check,sync,no_root_squash)  # this is a mountpoint of plain hard disk physical devices
/exportssd 192.168.0.0/24(rw,no_subtree_check,sync,no_root_squash)  # this is a mountpoint for the SSD physical disks

Make sure the NFS exports are loaded with the exportfs command.
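
Putting the storage preparation together, creating the SSD-backed mount point and publishing the exports could look roughly like this (the volume group, logical volume and device names here are illustrative assumptions, not values from the original setup):

# create and mount an LVM-backed filesystem for the SSD export (names are examples)
lvcreate -L 500G -n cinder_ssd vg_ssd
mkfs.xfs /dev/vg_ssd/cinder_ssd
mkdir -p /exportssd
mount /dev/vg_ssd/cinder_ssd /exportssd

# reload /etc/exports and verify what is being exported
exportfs -ra
exportfs -v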

As described in the Red Hat documentation for OpenStack, we have the option to define multiple configuration blocks in the cinder.conf file. The documentation leaves the impression that these are only for defining different storage drivers; while that is the most typical use case, Cinder does not stop you from defining multiple blocks that use the same driver.

[NAME]
volume_group=GROUP
volume_driver=DRIVER
volume_backend_name=BACKEND

Now edit the main configuration file, /etc/cinder/cinder.conf, and add a section for the SSD backend:

[nfsssd]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares_ssd.conf
volume_backend_name=nfsssd
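
For the plain HDD backend there would be a matching block. A minimal sketch might look like the following (the section name is chosen to match the enabled_backends list further down, and the shares file path is an assumption):

[nfs]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares.conf
volume_backend_name=nfs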

It’s very important to define a volume_backend_name: this is what allows the backend to appear in the Horizon GUI and to be selectable when creating a volume via the command line. To enable the backend we also have to add it to the comma-separated enabled_backends list in the top [DEFAULT] section of the same configuration file, /etc/cinder/cinder.conf.

enabled_backends=XXX,XXX,nfs,nfsssd
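
After changing cinder.conf, the Cinder services need to be restarted so the new backend is picked up. On a systemd-based installation this might look like the following (the service names vary between distributions and are assumptions here):

systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume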

As seen in the nfsssd section, we point nfs_shares_config at a separate shares file, which lists the NFS servers and export paths used by that backend.
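
The shares file simply contains one NFS server and export path per line. For the SSD export defined earlier, /etc/cinder/nfs_shares_ssd.conf might contain something like this (the server address is an assumption within the exported 192.168.0.0/24 network):

192.168.0.10:/exportssd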

To allow users to choose which backend their volumes are created on, rather than relying on the scheduler to pick one automatically, a volume type must be defined in the database. Once the services have been restarted and are running, this can be done with the following commands:

cinder type-create nfsssd
cinder type-key <the id returned from the create command> set volume_backend_name='nfsssd'
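
To verify that the type and its extra spec were stored correctly, you can list the volume types and their extra specs (a quick sanity check, not strictly required):

cinder type-list
cinder extra-specs-list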

This ties the volume type to the volume_backend_name specified in the configuration and lets us choose, when creating a volume, which type it should have – in other words, on which backend it should reside.

To test volume creation, use the command-line tool and specify the volume type:

cinder create --display-name test1 --volume-type nfsssd 2

This volume can now be attached to any of our virtual machines.
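
Attaching it can be done from Horizon or from the command line; with the nova client it would look roughly like this (the instance name and device path are placeholders):

nova volume-attach my-instance <volume-id> /dev/vdb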

I hope this article helped you. I would appreciate comments on how to further optimize the SSD backend – for example, whether performance is better with the backend directly on disk rather than on LVM.

I would also like to hear your stories: when did you need to use different volume backends, and what would you prefer instead of NFS?
