OpenStack on systemd-nspawn

Notes and scripts so far

The basic idea

  • Run a modern Linux distribution on the host, with a recent kernel and debugging tools
  • Run the OpenStack software itself on the most widely tested combination of packages

To this end we'll run Fedora 26 on the host, with CentOS 7 containers running RDO packages.

Puppet

We will use Puppet to manage the base OS as well as the configuration of the containerized OpenStack components. For instance, the Keystone container contains:

  • CentOS 7 (systemd, systemd-networkd)
  • OpenStack Keystone (provided by centos-release-openstack-newton)
  • httpd from CentOS 7

The services in the container are all configured to start at boot (httpd in the case of Keystone), while Puppet manages /var/lib/machines/keystone/etc/keystone/keystone.conf from the host.
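A minimal sketch of what this looks like in practice (assuming the container tree already exists under /var/lib/machines/keystone):

# run on the host: enable httpd inside the container image without booting it
systemctl --root=/var/lib/machines/keystone enable httpd

# the container's keystone.conf is an ordinary file on the host, so Puppet
# (or anything else on the host) can manage it in place
ls -l /var/lib/machines/keystone/etc/keystone/keystone.conf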

Container-side network configuration

# cat /var/lib/machines/keystone/etc/systemd/network/80-container-host0.network 
[Match]
Virtualization=container
Name=host0

[Network]
DHCP=yes
LinkLocalAddressing=yes
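Once the container is running, the lease it obtained on host0 can be checked from the host, for instance with machinectl (which lists the container's addresses):

machinectl status keystone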

Other container-side configuration

  • Ensure that pts/0 is in securetty
  • Ensure systemd-networkd is started at boot
  • Ensure a root password is set

This makes sure everything 'just works'; the commands below sketch one way to do it by hand.
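Using the keystone image as the example (build-image.py may handle this differently):

# allow logins on the nspawn console
echo pts/0 >> /var/lib/machines/keystone/etc/securetty

# enable networkd inside the image without booting it
systemctl --root=/var/lib/machines/keystone enable systemd-networkd

# run passwd inside the container to set a root password
systemd-nspawn -D /var/lib/machines/keystone passwd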

Host-side nspawn configuration

# cat /etc/systemd/nspawn/keystone.nspawn 
[Exec]
PrivateUsers=no

[Network]
Port=35357:35357
Port=5000:5000

This forwards the Keystone ports from the host to the container without any further configuration.

(PrivateUsers=no is required to allow httpd to install suexec; if we switch to another httpd we may be able to use private users.)
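Note that Port= only has an effect when the container runs in its own network namespace; the stock systemd-nspawn@.service uses a veth pair, so that is the case here. A quick way to try it out, assuming the keystone image is in place:

machinectl start keystone     # boots the container via systemd-nspawn@keystone.service
machinectl enable keystone    # start it automatically at host boot
curl http://localhost:5000/   # should reach keystone through the forwarded port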

Other host-side configuration

  • Ensure systemd-networkd is started (see below)
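Concretely:

# on the host
systemctl enable --now systemd-networkd

# systemd's stock /usr/lib/systemd/network/80-container-ve.network matches the
# ve-* host side of each container's veth pair and runs a DHCP server on it,
# which is what answers the DHCP=yes request inside the container
networkctl list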

systemd-nspawn and network namespaces for the neutron l3, dhcp, and lbaas agents

We can't create network namespaces from inside nspawn containers as long as nspawn doesn't set the shareable flag on the rootfs of the container. So for Neutron we need a workaround.

We need to create network namespaces in the host system's mount namespace. We can enter the host's mount namespace if we have a file descriptor into it available. Normally this would not be the case, but we can make sure we get access to one by bind mounting the host's /proc filesystem into the container at a different location. We then just need to make sure that calls to 'ip' get intercepted and run in the host mount namespace.

The neutron agents don't call '/sbin/ip' but just 'ip', so by modifying the service files for the agents to include a different PATH we can make them call our own ip without modifying /sbin/ip.
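For example, a drop-in along these lines inside the neutron container puts a wrapper directory first in PATH (the /opt/ip-wrapper directory is made up for this sketch, and the exact unit names depend on the neutron packaging):

# /etc/systemd/system/neutron-l3-agent.service.d/ip-wrapper.conf
[Service]
Environment=PATH=/opt/ip-wrapper:/sbin:/bin:/usr/sbin:/usr/bin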

The wrapper script:

#!/bin/bash
# Wrapper around /sbin/ip: namespace creation and deletion must happen in the
# host's mount namespace, everything else can stay inside the container.
if [ "$1" == "netns" ]; then
  if [ "$2" == "add" ] || [ "$2" == "delete" ]; then
    # /hostproc is the host's /proc bind-mounted into the container (see the
    # nspawn file below), so PID 1 there is the host's init.
    nsenter --mount=/hostproc/1/ns/mnt /sbin/ip "$@"
    exit $?
  fi
fi

/sbin/ip "$@"
exit $?
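The wrapper is installed inside the neutron container as 'ip' in whatever directory the PATH override puts first (again, /opt/ip-wrapper and ip-wrapper.sh are only example names):

mkdir -p /opt/ip-wrapper
install -m 0755 ip-wrapper.sh /opt/ip-wrapper/ip   # the script above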

The nspawn file for the container:

[Exec]
PrivateUsers=no

[Network]
Private=no

[Files]
Bind=/var/run/netns:/var/run/netns
Bind=/proc:/hostproc

Note that we really only need to run namespace creation and destruction in the host mount namespace. 'ip netns exec' and everything else can remain in the container.
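Because /var/run/netns is bind-mounted from the host, namespaces created by the agents should be visible on both sides (a rough check, assuming an agent has already created one):

# on the host: the namespaces are ordinary host network namespaces
ip netns list

# inside the container: the same names appear through the bind mount, so
# 'ip netns exec' and the rest of the agent keep working locally
ip netns list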

Image building

Some small tooling for the creation of the nspawn containers. It currently runs only on DNF-based distros. See build-image.py and images.yml.
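The general approach (not necessarily exactly what build-image.py does) is a dnf --installroot bootstrap; for a minimal Fedora guest, for instance, that looks roughly like:

dnf -y --installroot=/var/lib/machines/test --releasever=26 \
    --setopt=install_weak_deps=False install systemd passwd dnf fedora-release

Building the CentOS 7 + RDO images the same way additionally requires the CentOS and RDO repository definitions to be available to dnf on the host.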