Deployment Choices: NFS Security

Options

Starting with the Kilo release of OpenStack, two NAS security options, nas_secure_file_operations and nas_secure_file_permissions, were introduced to improve NFS security.

The Cinder design assumes that connections from OpenStack nodes to NFS backends use trusted physical networks. OpenStack services are also assumed to run on dedicated nodes whose processes are trusted. Exposure of storage resources to tenants is mediated by the hypervisor under the control of Cinder and Nova. Prior to the introduction of the two NAS security options, operations on the backing files for Cinder volumes ran only as root, and the files themselves were readable and writable by any user on OpenStack nodes that mounted the NFS backend shares.

Note

Consult your OpenStack distribution's documentation to determine whether this feature is supported.

nas_secure_file_operations (Optional; default: “auto”)

Run operations on the backing files for Cinder volumes as the cinder user rather than as root if ‘True’; as root if ‘False’. If ‘auto’, behave as ‘True’ in a “greenfield” environment and as ‘False’ if existing volumes are found on startup.

nas_secure_file_permissions (Optional; default: “auto”)

Create backing files for Cinder volumes readable and writable by owner and group only if ‘True’; readable and writable by owner, group, and world if ‘False’. If ‘auto’, behave as ‘True’ in a “greenfield” environment and as ‘False’ if existing volumes are found on startup.

Table 4.15. Configuration options for NFS Security

When nas_secure_file_operations is set to ‘true’, Cinder operations run as the dedicated cinder user rather than as root.

When nas_secure_file_permissions is set to ‘true’, backing files for Cinder volumes are readable and writable only by owner and group (mode 0660 rather than 0666). Since Cinder creates these files with owner and group cinder, only processes running with the cinder UID or GID can read or write them.
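As a quick illustration of the 0660 versus 0666 distinction, the following minimal shell sketch applies both modes to a scratch file (not a Cinder-managed backing file; assumes GNU coreutils stat):

```shell
# Create a scratch file and apply the two modes Cinder chooses between.
f=$(mktemp)
chmod 0666 "$f"          # readable and writable by owner, group, and world (insecure mode)
stat -c '%a' "$f"        # prints: 666
chmod 0660 "$f"          # readable and writable by owner and group only (secure mode)
stat -c '%a' "$f"        # prints: 660
rm -f "$f"
```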

When both nas_secure_file_operations and nas_secure_file_permissions are set to ‘true’, it is recommended to set the Superuser Security Type (also known as “root squash”) to none in the ONTAP export policy.

The default value of both of these options is ‘auto’. For backwards compatibility, if Cinder volumes already exist when Cinder starts up and the value of one of these options is ‘auto’, it is set to ‘false’ internally. If no Cinder volumes exist (a “greenfield” environment), the option is set to ‘true’ and a marker file, .cinderSecureEnvIndicator, is created under the mount directory. On startup, the marker file is checked so that the ‘true’ setting persists for subsequent startups after volumes have been created. It is recommended that nas_secure_file_operations and nas_secure_file_permissions both be set explicitly to either ‘true’ or ‘false’ in /etc/cinder/cinder.conf.
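Before changing these options from ‘auto’ to an explicit value, it can help to check whether the greenfield marker is present on each backend mount. A minimal sketch, assuming the default mount base /var/lib/cinder/mnt (the hashed mount-point names shown elsewhere in this section will differ in your environment):

```shell
# Report whether the NAS-security marker file exists under each mounted share.
for mnt in /var/lib/cinder/mnt/*/; do
    [ -d "$mnt" ] || continue                        # skip if nothing is mounted
    if [ -e "${mnt}.cinderSecureEnvIndicator" ]; then
        echo "secure marker present: ${mnt}"
    else
        echo "no secure marker:      ${mnt}"
    fi
done
```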

Setup

When the NAS security options are enabled, the OpenStack Cinder, Nova, and Glance nodes, as well as Data ONTAP, must be configured appropriately for OpenStack operations to succeed. If they are not, read and write operations on the backing files will fail.

The objective of enabling the NFS security options is to limit read and write permissions on the backing files for Cinder volumes to the cinder user’s UID and GID.

OpenStack Setup

  • For NFSv4 only, set the Domain in /etc/idmapd.conf on both storage and compute nodes to match that of the NFS server, then restart the idmapd service.

  • On Nova nodes, add the nova user to the cinder group.

  • On Nova nodes, set user = "nova", group = "cinder", and dynamic_ownership = 0 in /etc/libvirt/qemu.conf.

  • On Nova nodes, restart the libvirt, QEMU, and KVM processes or reboot the hypervisor. Also restart the Nova services.
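The qemu.conf changes above can be scripted. The following minimal sketch demonstrates the substitutions on a scratch copy containing the stock commented-out defaults (an assumption; on a real node you would apply the same sed expressions to /etc/libvirt/qemu.conf after taking a backup, and GNU sed is assumed for -i):

```shell
# Write a scratch copy with the stock defaults, then apply the three edits.
qemu_conf=$(mktemp)
cat > "$qemu_conf" <<'EOF'
#user = "root"
#group = "root"
#dynamic_ownership = 1
EOF

sed -i \
    -e 's|^#user = "root"|user = "nova"|' \
    -e 's|^#group = "root"|group = "cinder"|' \
    -e 's|^#dynamic_ownership = 1|dynamic_ownership = 0|' \
    "$qemu_conf"

# Show the effective settings.
grep -E '^(user|group|dynamic_ownership)' "$qemu_conf"
```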

NFS Security Checklists

The following checklists can be used as a reference to disable or enable NAS security. Some commands may differ based on the version of Linux being used. Disabling or enabling NAS security is disruptive to OpenStack processes.

How to Enable NFS Security

The following checklist provides a reference for enabling NFS security. Be advised that some of the commands shown in the examples may differ depending on your environment.

1. Cinder: Change NAS security options
2. Cinder: Determine the cinder user’s UID and GID
3. Nova & Glance: Add users to the cinder group
4. QEMU: Change the QEMU configuration
5. Idmapd: Edit the Domain setting (NFSv4.x only)
6. ONTAP: Add nova, glance, and cinder users to the cinder GID
7. ONTAP: Set exported Flexvol owner and group (Optional)
8. ONTAP: Set exported Flexvol permissions (Optional)
9. Cinder: Update file permissions
10. ONTAP: Disable superuser access in export policy
11. Cinder: Verify mounts and permissions

Table 4.16. Checklist to Enable NAS Security

1) Change NAS security options.

Set the nas_secure_file_operations and nas_secure_file_permissions to specify the NAS security mode. Make changes to /etc/cinder/cinder.conf in the backend’s configuration stanza.

[replace-with-nfs-backend]
...
nas_secure_file_operations = true
nas_secure_file_permissions = true
...

2) Determine the cinder user’s UID and GID.

$ id -u cinder
500
$ id -g cinder
510

3) Add users to cinder group.

To have file access, the Nova and Glance service users must belong to the same group as the cinder user. Perform this step on each node running Nova or Glance services.

...
$ sudo usermod -a -G replace-with-cinder-GID nova
$ sudo usermod -a -G replace-with-cinder-GID glance
...
$ id nova
uid=520(nova) gid=521(nova) groups=510(cinder),...
$ id glance
uid=530(glance) gid=531(glance) groups=510(cinder),...
...

4) Change QEMU configuration.

Certain compute operations (e.g., attaching a volume) require that libvirt, QEMU, and KVM run as a user belonging to the correct group. Edit the /etc/libvirt/qemu.conf file and make the following changes.

...
#user = "root"
user = "nova"
...
#group = "root"
group = "cinder"
...
#dynamic_ownership = 1
dynamic_ownership = 0
...

Note

After making the configuration changes, restart the libvirt, QEMU, and KVM processes or reboot the hypervisor. The Nova services must also be restarted. This is a disruptive operation that may require planning, depending on your environment.

5) Edit the Domain setting for idmapd (NFSv4.x only).

Idmapd is the NFSv4 bidirectional ID/name mapping daemon. The domain defined in /etc/idmapd.conf must match the NFS server’s domain. First, query ONTAP for the domain; then edit /etc/idmapd.conf and restart the idmapd service. This step is not necessary when using NFSv3.

...
CDOT:> vserver nfs show -vserver replace-with-vserver-name -fields v4-id-domain
...
vserver  v4-id-domain
-------- ---------------------
replace- nfsv4domain.somewhere.com
...

Edit the /etc/idmapd.conf on the Cinder node:

...
Domain = nfsv4domain.somewhere.com
...

6) Add nova, glance, and cinder users to cinder GID.

If local files are used for name services, the cluster leverages the unix-user and unix-group tables created for the specified SVM. The nova, glance, and cinder SVM users, if they exist, must belong to the same cinder GID (510 in this example) as used by the cinder service. The nova and glance users can be created with the ONTAP CLI unix-user create command if needed.

...
CDOT:> unix-group create -vserver replace-with-vserver-name -name cinder -id replace-with-cinder-GID
CDOT:> unix-group show -vserver replace-with-vserver-name
...
Vserver        Name                ID
-------------- ------------------- ----------
replace-with-  cinder              510
...
CDOT:> unix-user modify -vserver replace-with-vserver-name -user nova -primary-gid replace-with-cinder-GID
CDOT:> unix-user modify -vserver replace-with-vserver-name -user glance -primary-gid replace-with-cinder-GID
CDOT:> unix-user modify -vserver replace-with-vserver-name -user cinder -primary-gid replace-with-cinder-GID
CDOT:> unix-user show -vserver replace-with-vserver-name
...
               User            User   Group  Full
Vserver        Name            ID     ID     Name
-------------- --------------- ------ ------ --------------------------------
replace-with-  cinder          500    510
replace-with-  nova            501    510
replace-with-  glance          502    510
...

Note

NetApp recommends leveraging either NIS or LDAP for name services in larger environments.

7) Set exported Flexvol owner and group.

Access to a Flexvol can be further restricted by assigning it a specific user ID (UID) and group ID (GID). The UID must match the cinder UID on the Cinder node, and the GID must match the cinder GID. In this example the UID is 500 and the GID is 510; these values will differ on your Cinder node and must be determined before running the following commands. This step is optional.

CDOT:> volume show -vserver replace-with-vserver-name -volume replace-with-volume-name
...
User ID: 0
Group ID: 0
...
CDOT:> volume modify -vserver replace-with-vserver-name -volume replace-with-volume-name -user replace-with-cinder-UID -group replace-with-cinder-GID
CDOT:> volume show -vserver replace-with-vserver-name -volume replace-with-volume-name
...
User ID: 500
Group ID: 510
...

8) Set exported Flexvol permissions.

Access can be further restricted by setting the UNIX permissions on the volume. In this example we set the Flexvol permissions of the shared volume to 0755. This step is optional.

CDOT:> volume show -vserver replace-with-vserver-name -volume replace-with-volume-name
...
UNIX Permissions: ---rwxrwxrwx
...
CDOT:> volume modify -vserver replace-with-vserver-name -volume replace-with-volume-name -unix-permissions 0755
CDOT:> volume show -vserver replace-with-vserver-name -volume replace-with-volume-name
...
UNIX Permissions: ---rwxr-xr-x
...

9) Update file permissions to 0660.

Other OpenStack services (e.g., Nova and Glance) need “group” read/write privileges in order to access the Cinder volume backing files. This is accomplished by running chmod 0660 on all files in the mount points. Verify that the IP address of the mount point matches a LIF IP address of the correct SVM before executing the chmod and chown commands. The order of operations is: stop the Cinder services, run chmod and chown, then unmount the mount points.

$ sudo systemctl stop openstack-cinder-{api,scheduler,volume}
$ mount
...
192.168.100.10:/cinder_flexvol_1 on /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c type nfs (rw,...,addr=192.168.100.10)
192.168.100.10:/cinder_flexvol_2 on /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b type nfs (rw,...,addr=192.168.100.10)
...
$ cd /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c
$ sudo chmod -R 0660 *
$ sudo chown -R cinder:cinder *
$ cd /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b
$ sudo chmod -R 0660 *
$ sudo chown -R cinder:cinder *
$ cd /var/lib/cinder/mnt
$ sudo umount 69809486d67b39d4baa19744ef3ef90c
$ sudo umount 5821d3908bfae68920f0c7be2dfc0c7b

10) Disable superuser access in export policy.

Disabling superuser access in the export policy is effectively the same as enabling root squash. Any root access from an NFS client (i.e., UID 0) is remapped to the anonymous user (default UID 65534) when superuser access is disabled. The following example also disables set-user-ID (suid) and set-group-ID (sgid) access.

CDOT:> vserver export-policy rule show -vserver replace-with-vserver-name -policyname replace-with-policy-name -fields superuser,allow-suid
...
vserver  policyname ruleindex superuser allow-suid
-------- ---------- --------- --------- ----------
replace- cinder     1         any       true
...
CDOT:> vserver export-policy rule modify -vserver replace-with-vserver-name -policyname replace-with-policy-name -ruleindex replace-with-rule-index -protocol nfs -superuser none -allow-suid false
CDOT:> vserver export-policy rule show -vserver replace-with-vserver-name -policyname replace-with-policy-name -fields superuser,allow-suid
...
vserver  policyname ruleindex superuser allow-suid
-------- ---------- --------- --------- ----------
replace- cinder     1         none      false
...

11) Verify mounts and permissions.

In step 9 we unmounted the NFS mounts so that we can verify they are remounted properly when the Cinder volume service starts. Verify this by starting the Cinder services, examining the Cinder volume service log, creating a new Cinder volume, and listing the volume on the mount point.

$ sudo systemctl start openstack-cinder-{api,scheduler,volume}
$ cinder create --name test-vol-01 1
...
| id                             | 9c989cba-eff6-4847-b5fc-bff2ab5d35da |
...
$ ls -l /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b/volume-9c989cba-eff6-4847-b5fc-bff2ab5d35da
...
-rw-rw---- 1 cinder cinder 1073741824 Oct 12 13:15 /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b/volume-9c989cba-eff6-4847-b5fc-bff2ab5d35da
...

How to Disable NFS Security

The following checklist provides a reference for disabling NFS security. Be advised that some of the commands shown in the examples may differ depending on your environment.

1. Cinder: Update NAS security options
2. ONTAP: Allow superuser access in export policy
3. Cinder: Update file permissions
4. Cinder: Delete the .cinderSecureEnvIndicator file
5. Cinder: Verify mounts and permissions

Table 4.17. Checklist to Disable NFS Security

1) Update NAS Security options.

Set the nas_secure_file_operations and nas_secure_file_permissions to specify the NAS security mode. Make changes to /etc/cinder/cinder.conf in the backend’s configuration stanza.

[replace-with-nfs-backend]
...
nas_secure_file_operations = false
nas_secure_file_permissions = false
...

2) Enable Superuser access in the export policy.

CDOT:> vserver export-policy rule show -vserver replace-with-vserver-name -policyname replace-with-policy-name -fields superuser
...
vserver  policyname ruleindex superuser
-------- ---------- --------- ---------
replace- cinder     1         none
...
CDOT:> vserver export-policy rule modify -vserver replace-with-vserver-name -policyname replace-with-policy-name -ruleindex replace-with-rule-index -protocol nfs -superuser any
CDOT:> vserver export-policy rule show -vserver replace-with-vserver-name -policyname replace-with-policy-name -fields superuser
...
vserver  policyname ruleindex superuser
-------- ---------- --------- ---------
replace- cinder     1         any
...

3) Update file permissions to 0666.

Other OpenStack services (e.g., Nova and Glance) need “world” read/write privileges in order to access the Cinder volume backing files. This is accomplished by running chmod 0666 on all files in the mount points. Verify that the IP address of the mount point matches a LIF IP address of the correct SVM before executing the chmod and chown commands. The order of operations is: stop the Cinder services, run chmod and chown, unmount the mount points, and start the Cinder services.

$ sudo systemctl stop openstack-cinder-{api,scheduler,volume}
$ mount
...
192.168.100.10:/cinder_flexvol_1 on /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c type nfs (rw,...,addr=192.168.100.10)
192.168.100.10:/cinder_flexvol_2 on /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b type nfs (rw,...,addr=192.168.100.10)
...
$ cd /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c
$ sudo chmod -R 0666 *
$ sudo chown -R root:root *
$ cd /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b
$ sudo chmod -R 0666 *
$ sudo chown -R root:root *
$ cd /var/lib/cinder/mnt
$ sudo umount 69809486d67b39d4baa19744ef3ef90c
$ sudo umount 5821d3908bfae68920f0c7be2dfc0c7b
$ sudo systemctl start openstack-cinder-{api,scheduler,volume}

4) Delete the .cinderSecureEnvIndicator file if it exists.

The Cinder volume service creates the .cinderSecureEnvIndicator file under the mount directory as an indicator that NAS security is enabled. Remove it from each mount so that a later startup does not re-enable secure mode when the options are set to ‘auto’.

$ mount
...
192.168.100.10:/cinder_flexvol_1 on /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c type nfs (rw,...,addr=192.168.100.10)
192.168.100.10:/cinder_flexvol_2 on /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b type nfs (rw,...,addr=192.168.100.10)
...
$ cd /var/lib/cinder/mnt/69809486d67b39d4baa19744ef3ef90c
$ sudo rm .cinderSecureEnvIndicator
$ cd /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b
$ sudo rm .cinderSecureEnvIndicator

5) Verify mounts and permissions.

In step 3 we unmounted the NFS mounts so that we can verify they are remounted properly when the Cinder volume service starts. Verify this by examining the Cinder volume service log, creating a new Cinder volume, and listing the volume on the mount point. It is recommended that the Nova services be restarted, followed by verification that attaching a volume to a compute instance works.

$ cinder create --name test-vol-01 1
...
| id                             | 9c989cba-eff6-4847-b5fc-bff2ab5d35da |
...
$ ls -l /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b/volume-9c989cba-eff6-4847-b5fc-bff2ab5d35da
...
-rw-rw-rw- 1 root root 1073741824 Oct 12 13:15 /var/lib/cinder/mnt/5821d3908bfae68920f0c7be2dfc0c7b/volume-9c989cba-eff6-4847-b5fc-bff2ab5d35da
...