OpenStack
=========

.. contents:: Table of Contents

Introduction
------------

This guide aims to help Cloud Administrators with deploying, managing, and upgrading OpenStack.

Releases
~~~~~~~~

Each OpenStack release starts with a letter, chronologically starting with "A". These are usually named after the city where one of the recent development conferences was held. The major version number of OpenStack represents the major version number of each piece of software in that release. For example, Ocata software is versioned as ``15.X.X``. A new feature release comes out after about 6 months of development. Every major release adheres to a maintenance cycle.

Maintenance Phases <= Newton:

-  Phase 1 = 6 months of stability and security fixes.
-  Phase 2 = 6 months of major stability and security fixes.
-  Phase 3 = 6 months of major security fixes.

Maintenance Phases >= Ocata:

-  Maintained = 18 months of stability and security fixes and official releases from the OpenStack Foundation.
-  Extended Maintenance (em) = Stability and security fixes by community contributors. There are no tagged minor releases. The code will be treated as a rolling minor release. A project can stay in extended maintenance for as long as it wants.
-  Unmaintained = 6 months of no community contributions.
-  End-of-life (eol) = The last version of that OpenStack release to be archived.

[42]

Releases:

1. Austin
2. Bexar
3. Cactus
4. Diablo
5. Essex
6. Folsom
7. Grizzly
8. Havana
9. Icehouse
10. Juno
11. Kilo
12. Liberty
13. Mitaka
14. Newton

    -  Release: 2016-10-06
    -  EOL: 2017-10-11

15. Ocata

    -  Release: 2017-02-22
    -  Goals [3]:

       -  Stability. This release included features that are mainly related to reliability, scaling, and performance enhancements. It came out 5 months after Newton, instead of the usual 6, due to the minimal amount of major changes. [2]
       -  Remove old OpenStack libraries that were built into some services. Instead, services should rely on the proper up-to-date dependencies provided by external packages.

    -  `New Features `__

16. Pike

    -  Release: 2017-08-30
    -  Goals [4]:

       -  Convert most of the OpenStack code to be compatible with Python 3. This is because Python 2 will become EOL in 2020.
       -  Make all APIs into WSGI applications. This will allow web servers to scale out and run faster with tuning compared to running as a standalone Python daemon.

    -  `New Features `__

17. Queens

    -  Release: 2018-02-28
    -  Goals [5]:

       -  Remove the need for the access control list "policy" files by having default values defined in the source code.
       -  Tempest will be split up into different projects for maintaining individual service unit tests. This contrasts with the old model that had all Tempest tests maintained in one central repository.

    -  `New Features `__
    -  `Release Highlights `__

18. Rocky

    -  Release: 2018-08-30
    -  Goals [53]:

       -  Make configuration options mutable. This avoids having to restart services whenever the configuration is updated.
       -  Remove deprecated mox tests to further push towards full Python 3 support.

    -  `New Features `__
    -  `Release Highlights `__

19. Stein

    -  Release: 2019-04-10
    -  Goals [54]:

       -  Use Python 3 by default. Python 2.7 will only be tested using unit tests.
       -  Pre-upgrade checks. Verify if an upgrade will be successful and provide useful information to the end-user on how to overcome known issues.

    -  `New Features `__
    -  `Release Highlights `__

20. Train

    -  Release: 2019-10-16
    -  Goals [64]:

       -  Fully support IPv6 environments where IPv4 is not available or used.
       -  Test against Python 3.7 instead of 3.5.
       -  Project documentation will additionally be built and provided as PDF files (in addition to HTML webpages).

    -  `New Features `__
    -  `Release Highlights `__

21. Ussuri

    -  Release: 2020-05-13
    -  Goals [65]:

       -  Python 2.7 support has been fully removed. Python >= 3.5 is required.
       -  Project Team Lead (PTL) documentation for each project. This will help with the transition when a new PTL takes charge of a given OpenStack project.

    -  `New Features `__
    -  `Release Highlights `__

[1]

Services
~~~~~~~~

OpenStack has a large range of services that manage different components in a modular way.

Most popular services (50% or more of OpenStack cloud operators have adopted):

-  Ceilometer = Telemetry
-  Cinder = Block Storage
-  Glance = Image
-  Heat = Orchestration
-  Horizon = Dashboard
-  Keystone = Authentication
-  Neutron = Networking
-  Nova = Compute
-  Swift = Object Storage

Other services:

-  Aodh = Telemetry Alarming
-  Barbican = Key Management
-  CloudKitty = Billing
-  Congress = Governance
-  Designate = DNS
-  Freezer = Backup and Recovery
-  Ironic = Bare-Metal Provisioning
-  Karbor = Data Protection
-  Kuryr = Container Plugin
-  Magnum = Container Orchestration Engine Provisioning
-  Manila = Shared File Systems
-  Mistral = OpenStack Workflow
-  Monasca = Monitoring
-  Murano = Application Catalog
-  Octavia = Load Balancing
-  Rally = Benchmark
-  Sahara = Big Data Processing Framework Provisioning
-  Senlin = Clustering
-  Solum = Software Development Lifecycle Automation
-  Searchlight = Indexing
-  Tacker = NFV Orchestration
-  Tricircle = Multi-Region Networking Automation
-  TripleO = Deployment
-  Trove = Database
-  Vitrage = Root Cause Analysis
-  Watcher = Optimization
-  Zaqar = Messaging
-  Zun = Containers

[6]

Configurations
--------------

This section focuses on the configuration files and their settings for each OpenStack service.

Common
~~~~~~

These are the generic INI configuration options for setting up different OpenStack services.

Database
^^^^^^^^

Different database servers can be used by the API services on the controller nodes.

-  MariaDB/MySQL. The original ``mysql://`` connector can be used with the "MySQL-Python" library. Starting with Liberty, the newer "PyMySQL" library was added for Python 3 support. [7] RDO first added the required ``python2-PyMySQL`` package in the Pike release. [10][49]

   .. code-block:: ini

       [database]
       connection = mysql+pymysql://<user>:<password>@<host>:<port>/<database>

-  PostgreSQL. Requires the "psycopg2" Python library. [8]

   .. code-block:: ini

       [database]
       connection = postgresql://<user>:<password>@<host>:<port>/<database>

-  SQLite.

   .. code-block:: ini

       [database]
       connection = sqlite:///<database>.sqlite

-  MongoDB is generally only used for Ceilometer when it is not using the Gnocchi back-end. [9]

   .. code-block:: ini

       [database]
       connection = mongodb://<user>:<password>@<host>:<port>/<database>

Messaging
^^^^^^^^^

For high availability and scalability, servers should be configured with a messaging agent. This allows a client's request to correctly be handled by the messaging queue and sent to one node to process that request. The configuration has been consolidated into the ``transport_url`` option. Multiple messaging hosts can be defined by using a comma before naming the virtual host.

.. code-block:: ini

    transport_url = <transport>://<user1>:<password1>@<host1>:<port1>,<user2>:<password2>@<host2>:<port2>/<virtual_host>

Scenario #1 - RabbitMQ

On the controller nodes, RabbitMQ needs to be installed. Then a user must be created with full privileges.

.. code-block:: sh

    $ sudo rabbitmqctl add_user openstack <password>
    $ sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"

In the configuration file for every service, set the ``transport_url`` option for RabbitMQ. A virtual host is not required. By default it will use ``/``.

.. code-block:: ini

    [DEFAULT]
    transport_url = rabbit://<user>:<password>@<host>/

[11][12]
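For illustration, a filled-in ``transport_url`` for a two-node RabbitMQ cluster might look like the sketch below. The hostnames, password, and port are only example values and should be replaced with the deployment's actual settings.

.. code-block:: ini

    [DEFAULT]
    # Example only: two RabbitMQ hosts ("rabbit1" and "rabbit2") on the default
    # port 5672, the "openstack" user, and the default "/" virtual host.
    transport_url = rabbit://openstack:secret@rabbit1.example.com:5672,openstack:secret@rabbit2.example.com:5672/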
Ironic
~~~~~~

Drivers
^^^^^^^

Ironic supports different ways of managing power cycling of managed nodes. The default enabled driver is IPMITool.

OpenStack Newton configuration:

File: /etc/ironic/ironic.conf

.. code-block:: ini

    [DEFAULT]
    enabled_drivers = <driver>

OpenStack Queens configuration:

.. code-block:: ini

    [DEFAULT]
    enabled_hardware_types = <hardware_type>
    enabled_power_interfaces = <power_interface>
    enabled_management_interfaces = <management_interface>

TripleO Queens configuration [55]:

.. code-block:: yaml

    parameter_defaults:
      IronicEnabledHardwareTypes:
        - <hardware_type>
      IronicEnabledPowerInterfaces:
        - <power_interface>
      IronicEnabledManagementInterfaces:
        - <management_interface>

Supported Drivers:

-  CIMC: Cisco UCS servers (C series only).
-  iDRAC.
-  iLO: HPE ProLiant servers.
-  HP OneView.
-  IPMITool.
-  iRMC: FUJITSU PRIMERGY servers.
-  SNMP power racks.
-  UCS: Cisco UCS servers (B and C series).

Each driver has different dependencies and configurations as outlined `here `__.

Unsupported `Ironic Staging Drivers `__:

-  AMT
-  iBoot
-  Wake-On-Lan

Unsupported Drivers:

-  MSFT OCS
-  SeaMicro
-  VirtualBox

[75]

Keystone
~~~~~~~~

API v3
^^^^^^

In Mitaka, the Keystone v2.0 API was deprecated. It will be removed entirely from OpenStack in the ``T`` release. [13] It is possible to run both v2.0 and v3 at the same time, but it is desirable to move towards the v3 standard. If both have to be enabled, services should be configured to use v2.0 or else problems can occur with v3's domain scoping.

To disable v2.0 entirely, Keystone's API paste configuration needs to have these lines removed (or commented out) and then the web server should be restarted.

File: /etc/keystone/keystone-paste.ini

.. code-block:: ini

    [pipeline:public_api]
    pipeline = cors sizelimit url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension public_service

    [pipeline:admin_api]
    pipeline = cors sizelimit url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension s3_extension admin_service

    [composite:main]
    /v2.0 = public_api

    [composite:admin]
    /v2.0 = admin_api

[14]

Token Provider
^^^^^^^^^^^^^^

The token provider is used to create and delete tokens for authentication. Different providers can be configured.

File: /etc/keystone/keystone.conf

Scenario #1 - UUID (default)

.. code-block:: ini

    [token]
    provider = uuid

Scenario #2 - Fernet (recommended)

This provides the fastest token creation and validation. A public and private key will need to be created for Fernet and the related Credential authentication.

.. code-block:: ini

    [token]
    provider = fernet

    [fernet_tokens]
    key_repository = /etc/keystone/fernet-keys/

    [credential]
    provider = fernet
    key_repository = /etc/keystone/credential-keys/

-  Create the required keys:

   .. code-block:: sh

       $ sudo mkdir /etc/keystone/fernet-keys/
       $ sudo chmod 750 /etc/keystone/fernet-keys/
       $ sudo chown keystone.keystone /etc/keystone/fernet-keys/
       $ sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

   .. code-block:: sh

       $ sudo mkdir /etc/keystone/credential-keys/
       $ sudo chmod 750 /etc/keystone/credential-keys/
       $ sudo chown keystone.keystone /etc/keystone/credential-keys/
       $ sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

[15]

TripleO Queens configuration [56]:

Create the Fernet keys and save them to Swift.
.. code-block:: sh

    $ source ~/stackrc
    $ sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    $ sudo tar -zcf keystone-fernet-keys.tar.gz /etc/keystone/fernet-keys
    $ upload-swift-artifacts -f keystone-fernet-keys.tar.gz --environment ~/templates/deployment-artifacts.yaml

Verify that the object was saved to Swift and that the necessary environment template was generated.

.. code-block:: sh

    $ swift list overcloud-artifacts
    keystone-fernet-keys.tar.gz
    $ cat ~/templates/deployment-artifacts.yaml

Append the token provider setting to the "parameter_defaults" section in the "deployment-artifacts.yaml" file. Then use this file for the Overcloud deployment.

.. code-block:: yaml

    parameter_defaults:
      controllerExtraConfig:
        keystone::token_provider: "fernet"

Scenario #3 - PKI

PKI tokens have been removed since the Ocata release. [16]

.. code-block:: ini

    [token]
    provider = pki

-  Create the certificates. A new directory "/etc/keystone/ssl/" will be used to store these files.

   .. code-block:: sh

       $ sudo keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Nova
~~~~

File: /etc/nova/nova.conf

-  For the controller nodes, specify the database connection strings for both the "nova" and "nova_api" databases.

   .. code-block:: ini

       [api_database]
       connection = <db_type>://<user>:<password>@<host>/nova_api

       [database]
       connection = <db_type>://<user>:<password>@<host>/nova

-  Enable support for the Nova API and Nova's metadata API. If "metadata" is specified here, then "openstack-nova-api" will handle the metadata and not "openstack-nova-metadata-api."

   .. code-block:: ini

       [DEFAULT]
       enabled_apis = osapi_compute,metadata

-  Do not inject passwords, SSH keys, or partitions via Nova. This is recommended for Ceph storage back-ends. [20] This should be handled by Nova's metadata service, which will use cloud-init, instead of by Nova itself. That service will either be "openstack-nova-api" or "openstack-nova-metadata-api" depending on the configuration.

   .. code-block:: ini

       [libvirt]
       inject_password = False
       inject_key = False
       inject_partition = -2

Hypervisors
^^^^^^^^^^^

Nova supports a wide range of virtualization technologies. Full hardware virtualization, paravirtualization, or containers can be used. Even Windows' Hyper-V is supported.

File: /etc/nova/nova.conf

Scenario #1 - KVM

.. code-block:: ini

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = kvm

Scenario #2 - Xen

.. code-block:: ini

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = xen

Scenario #3 - LXC

.. code-block:: ini

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = lxc

[17]

CPU Pinning
^^^^^^^^^^^

-  Verify that the processor(s) have hardware support for non-uniform memory access (NUMA). If they do, NUMA may still need to be turned on in the BIOS. NUMA nodes are the physical processors. These processors are then mapped to specific sectors of RAM.

   .. code-block:: sh

       $ sudo lscpu | grep NUMA
       NUMA node(s):          2
       NUMA node0 CPU(s):     0-9,20-29
       NUMA node1 CPU(s):     10-19,30-39

   .. code-block:: sh

       $ sudo numactl --hardware
       available: 2 nodes (0-1)
       node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
       node 0 size: 49046 MB
       node 0 free: 31090 MB
       node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
       node 1 size: 49152 MB
       node 1 free: 31066 MB
       node distances:
       node   0   1
         0:  10  21
         1:  21  10

   .. code-block:: sh

       $ sudo virsh nodeinfo | grep NUMA
       NUMA cell(s):        2

   [18]

-  Append the NUMA filter "NUMATopologyFilter" to the Nova ``scheduler_default_filters`` key.

   File: /etc/nova/nova.conf
   .. code-block:: ini

       [DEFAULT]
       scheduler_default_filters = <existing_filters>,NUMATopologyFilter

-  Restart the Nova scheduler service on the controller node(s).

   .. code-block:: sh

       $ sudo systemctl restart openstack-nova-scheduler

-  Set the aggregate/availability zone to allow pinning.

   .. code-block:: sh

       $ openstack aggregate create <aggregate_name>
       $ openstack aggregate set --property pinned=true <aggregate_name>

-  Add the compute hosts to the new aggregate zone.

   .. code-block:: sh

       $ openstack host list | grep compute
       $ openstack aggregate host add <aggregate_name> <compute_host>

-  Modify a flavor to provide dedicated CPU pinning. There are three supported policies to use:

   -  isolate = Use cores on the same physical processor. Do not allocate any threads.
   -  prefer (default) = Cores and threads should be on the same physical processor. Fall back to using mixed cores and threads across different processors if there are not enough resources available.
   -  require = Cores and threads must be on the same physical processor.

   .. code-block:: sh

       $ openstack flavor set --property hw:cpu_policy=dedicated --property hw:cpu_thread_policy=<cpu_thread_policy> <flavor>

-  Alternatively, set the CPU pinning properties on an image.

   .. code-block:: sh

       $ openstack image set --property hw_cpu_policy=dedicated --property hw_cpu_thread_policy=<cpu_thread_policy> <image>

[19]

Ceph
^^^^

Nova can be configured to use Ceph as the storage provider for instances. This works with any QEMU and Libvirt based hypervisor.

File: /etc/nova/nova.conf

.. code-block:: ini

    [libvirt]
    images_type = rbd
    images_rbd_pool = <ceph_pool>
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = <ceph_user>
    rbd_secret_uuid = <libvirt_secret_uuid>
    disk_cachemodes="network=writeback"

[20]

Nested Virtualization
^^^^^^^^^^^^^^^^^^^^^

Nested virtualization allows virtual machines to run virtual machines inside of them. The kernel module must be stopped, the nested setting enabled, and then the module must be started again.

Intel:

.. code-block:: sh

    $ sudo rmmod kvm_intel
    $ echo "options kvm_intel nested=1" | sudo tee -a /etc/modprobe.d/kvm_intel.conf
    $ sudo modprobe kvm_intel

AMD:

.. code-block:: sh

    $ sudo rmmod kvm_amd
    $ echo "options kvm_amd nested=1" | sudo tee -a /etc/modprobe.d/kvm_amd.conf
    $ sudo modprobe kvm_amd

-  Use a hypervisor technology that supports nested virtualization, such as KVM.

   File: /etc/nova/nova.conf

   .. code-block:: ini

       [libvirt]
       virt_type = kvm
       cpu_mode = host-passthrough

[21]

Neutron
~~~~~~~

Network Types
^^^^^^^^^^^^^

In OpenStack, there are two common scenarios for networks: ``provider`` and ``self-service``.

Provider:

-  Simpler
-  Instances have direct access to a bridge device.
-  Faster
-  Best network for bare-metal machines.

Self-service:

-  Complex
-  Instances' network traffic is isolated and tunneled.
-  More available virtual networks
-  Required for Firewall-as-a-Service (FWaaS) and Load-Balancing-as-a-Service (LBaaS) to work.

[22]

For software-defined networking, the recommended services to use for Neutron's Modular Layer 2 (ML2) plugin are Open vSwitch (OVS) or Open Virtual Networking (OVN). OVS is mature and tested. OVN is an extension of OVS that uses a newer tunneling protocol named Geneve, which is faster, more efficient, and allows for more tunnels than older protocols such as VXLAN. For compatibility purposes, OVN works with VXLAN, but those tunnels lose the benefits that Geneve offers. OVN also provides more features, such as built-in services for handling DHCP, NAT, load-balancing, and more. [51]
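Since OVN is selected through the same ML2 plugin, a minimal sketch of what that might look like is shown below. The section and option names are the standard ML2 settings; the Geneve VNI range is an arbitrary example, and OVN-specific settings (such as the northbound/southbound database connections) are omitted here.

.. code-block:: ini

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only.
    [ml2]
    type_drivers = flat,geneve
    tenant_network_types = geneve
    mechanism_drivers = ovn

    [ml2_type_geneve]
    # Arbitrary example range of Geneve VNIs.
    vni_ranges = 1:65536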
Provider Networks
'''''''''''''''''

Linux Bridge
&&&&&&&&&&&&

https://docs.openstack.org/neutron/queens/admin/deploy-lb-provider.html

Open vSwitch
&&&&&&&&&&&&

https://docs.openstack.org/neutron/queens/admin/deploy-ovs-provider.html

Self-Service Networks
'''''''''''''''''''''

Linux Bridge
&&&&&&&&&&&&

https://docs.openstack.org/neutron/queens/admin/deploy-lb-selfservice.html

Open vSwitch
&&&&&&&&&&&&

One device is required, but it is recommended to separate traffic onto two different network interfaces. There is ``br-vlan`` (sometimes also referred to as ``br-provider``) for internal tagged traffic and ``br-ex`` for external connectivity.

.. code-block:: sh

    $ sudo ovs-vsctl add-br br-vlan
    $ sudo ovs-vsctl add-port br-vlan <interface_1>
    $ sudo ovs-vsctl add-br br-ex
    $ sudo ovs-vsctl add-port br-ex <interface_2>

File: /etc/neutron/neutron.conf

.. code-block:: ini

    [DEFAULT]
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True

File: /etc/neutron/plugins/ml2/ml2_conf.ini

.. code-block:: ini

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vxlan]
    vni_ranges = <vni_start>,<vni_end>

-  The ``