Upgrading Individual OpenStack Services (Live Compute) in a Standard Environment
This section describes the steps to follow to upgrade your cloud deployment by updating one service at a time, with live Compute, in a non-High Availability (HA) environment. This scenario covers upgrading from Newton to Ocata in environments that do not use TripleO.
A live Compute upgrade minimizes interruptions to your Compute service: the smaller services are down for only a few minutes, and the longer migration interval applies only to workloads moving to newly upgraded Compute hosts. Existing workloads can run indefinitely, and you do not need to wait for a database migration.
|Due to certain package dependencies, upgrading the packages for one OpenStack service might cause Python libraries to upgrade before other OpenStack services upgrade. This might cause certain services to fail prematurely. In this situation, continue upgrading the remaining services. All services should be operational upon completion of this scenario.|
|This method may require additional hardware resources to bring up the Compute nodes.|
On each node, install the Ocata release repository.
# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum install -y centos-release-openstack-ocata
Make sure that any previous release repositories are disabled. For example:
# yum-config-manager --disable centos-release-openstack-newton
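To confirm that no previous release repositories remain enabled, you can check, for example:
# yum repolist enabled | grep -i openstack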
Before updating, take a systemd snapshot of the OpenStack services:
# systemctl snapshot openstack-services
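If you later need to return services to the running state recorded in the snapshot, you can typically activate it with systemctl isolate. Note that a systemd snapshot records which units were running, not package versions:
# systemctl isolate openstack-services.snapshot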
Disable the main OpenStack services:
# systemctl stop 'openstack-*'
# systemctl stop 'neutron-*'
# systemctl stop 'openvswitch'
Upgrade the openstack-selinux package:
# yum upgrade openstack-selinux
This is necessary to ensure that the upgraded services run correctly on a system with SELinux enabled.
Upgrading Identity (keystone) and Dashboard (horizon)
Disable the Identity service and the Dashboard WSGI applets:
# systemctl stop httpd
Update the packages for both services:
# yum -d1 -y upgrade \*keystone\*
# yum -y upgrade \*horizon\* \*openstack-dashboard\*
# yum -d1 -y upgrade \*horizon\* \*python-django\*
It is possible that the Identity service's token table has a large number of expired entries, which can dramatically increase the time it takes to complete the database schema upgrade. To alleviate the problem, use the keystone-manage command to flush expired tokens from the database before running the Identity database upgrade:
# keystone-manage token_flush
Then update the Identity service database schema:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
You can arrange to run the token_flush command periodically using cron.
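For example, a minimal cron entry for the keystone user might look like the following (the hourly schedule and output redirection are illustrative choices, not requirements):
# crontab -u keystone -e
@hourly /usr/bin/keystone-manage token_flush >/dev/null 2>&1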
Restart the httpd service to bring the Identity service and Dashboard back online:
# systemctl start httpd
Upgrading Object Storage (swift)
On your Object Storage hosts, run:
# systemctl stop '*swift*'
# yum -d1 -y upgrade \*swift\*
# systemctl start openstack-swift-account-auditor \
   openstack-swift-account-reaper \
   openstack-swift-account-replicator \
   openstack-swift-account \
   openstack-swift-container-auditor \
   openstack-swift-container-replicator \
   openstack-swift-container-updater \
   openstack-swift-container \
   openstack-swift-object-auditor \
   openstack-swift-object-replicator \
   openstack-swift-object-updater \
   openstack-swift-object \
   openstack-swift-proxy
Upgrading Image Service (glance)
On your Image Service host, run:
# systemctl stop '*glance*'
# yum -d1 -y upgrade \*glance\*
# su -s /bin/sh -c "glance-manage db_sync" glance
# systemctl start openstack-glance-api \
   openstack-glance-registry
Upgrading Block Storage (cinder)
On your Block Storage host, run:
# systemctl stop '*cinder*'
# yum -d1 -y upgrade \*cinder\*
# su -s /bin/sh -c "cinder-manage db sync" cinder
# systemctl start openstack-cinder-api \
   openstack-cinder-scheduler \
   openstack-cinder-volume
Upgrading Orchestration (heat)
On your Orchestration host, run:
# systemctl stop '*heat*'
# yum -d1 -y upgrade \*heat\*
# su -s /bin/sh -c "heat-manage db_sync" heat
# systemctl start openstack-heat-api-cfn \
   openstack-heat-api-cloudwatch \
   openstack-heat-api \
   openstack-heat-engine
Upgrading Telemetry (ceilometer)
On all nodes hosting Telemetry component services, run:
# systemctl stop '*ceilometer*'
# systemctl stop '*aodh*'
# systemctl stop '*gnocchi*'
# yum -d1 -y upgrade \*ceilometer\* \*aodh\* \*gnocchi\*
On the Controller node, where the database is installed, run:
# ceilometer-dbsync
# aodh-dbsync
# gnocchi-upgrade
After completing the package upgrade, restart the Telemetry service by running the following command on all nodes hosting Telemetry component services:
# systemctl start openstack-ceilometer-api \
   openstack-ceilometer-central \
   openstack-ceilometer-collector \
   openstack-ceilometer-notification \
   openstack-aodh-evaluator \
   openstack-aodh-listener \
   openstack-aodh-notifier \
   openstack-gnocchi-metricd \
   openstack-gnocchi-statsd
Upgrading Compute (nova)
If you are performing a rolling upgrade of your Compute hosts, you need to set explicit API version limits to ensure compatibility in your environment.
Before starting Compute services on Controller or Compute nodes, set the compute option in the [upgrade_levels] section of /etc/nova/nova.conf to the previous OpenStack version (newton):
# crudini --set /etc/nova/nova.conf upgrade_levels compute newton
You need to make this change on your Controller and Compute nodes. You should undo this operation after upgrading all of your Compute nodes.
|If the crudini command is not available, install the crudini package:|
# yum install crudini
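To confirm the change, you can read the value back (crudini --get prints the current value, in this case newton):
# crudini --get /etc/nova/nova.conf upgrade_levels compute
newton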
On your Controller and Compute nodes, run:
# systemctl stop '*nova*'
# yum -d1 -y upgrade \*nova\*
|Unless stated otherwise, run the following commands on your Controller host.|
In Ocata, the Compute service needs at least the cell0 database and one Cell V2 cell database, which you can name, for example, cell1.
If you have not created the cell0 database in your Newton environment, you need to create it in Ocata, similarly to how you created other nova databases in the past. The database name should follow the form nova_cell0, where nova is the name of the existing nova database.
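For example, a minimal sketch on MariaDB/MySQL, assuming the default database names and a nova database user (NOVA_DBPASS is a placeholder for your actual password):
$ mysql -u root -p
mysql> CREATE DATABASE nova_cell0;
mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';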
After creating the cell0 database, create a Placement service user. For example:
$ openstack user create --domain default --password-prompt placement
Add the appropriate role to the Placement service user. For example:
$ openstack role add --project service --user placement admin
Create the Placement API entry. For example:
$ openstack service create --name placement --description "Placement API" placement
Create the appropriate Placement API service endpoints. For example:
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
Make sure that you have the required Placement API package installed:
# yum install openstack-nova-placement-api
Configure the Placement API in the [placement] section of the /etc/nova/nova.conf file on both the Controller and Compute nodes. For example:
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = MyPlacementPassword
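If you prefer to script these settings with crudini, which is used earlier in this procedure, equivalent commands might look like the following (values taken from the example above):
# crudini --set /etc/nova/nova.conf placement os_region_name RegionOne
# crudini --set /etc/nova/nova.conf placement project_domain_name Default
# crudini --set /etc/nova/nova.conf placement project_name service
# crudini --set /etc/nova/nova.conf placement auth_type password
# crudini --set /etc/nova/nova.conf placement user_domain_name Default
# crudini --set /etc/nova/nova.conf placement auth_url http://controller:35357/v3
# crudini --set /etc/nova/nova.conf placement username placement
# crudini --set /etc/nova/nova.conf placement password MyPlacementPassword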
Create and map the cell0 database with Cells V2:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Apply the schema to the cell0 database by updating the main nova database:
# su -s /bin/sh -c "nova-manage db sync" nova
# su -s /bin/sh -c "nova-manage --config-file /etc/nova/nova.conf cell_v2 create_cell --name=cell1 --verbose" nova
Run this command for each cell in your environment.
Discover Compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" novaNote
If you add more Compute hosts in your environment, remember to run the
List registered cells to get their UUIDs:
# nova-manage cell_v2 list_cells
Pass the UUID for cell1 to the map_instances command to map instances to cells:
# su -s /bin/sh -c "nova-manage cell_v2 map_instances --cell_uuid <cell1 UUID>" nova
# su -s /bin/sh -c "nova-manage api_db sync" nova
After you have upgraded all of your hosts, you will want to remove the API limits configured in the previous step. On all of your hosts:
# crudini --del /etc/nova/nova.conf upgrade_levels compute
Restart the Compute service on all Controller nodes:
# systemctl start openstack-nova-api \
   openstack-nova-conductor \
   openstack-nova-consoleauth \
   openstack-nova-novncproxy \
   openstack-nova-scheduler
Restart the Compute service on all Compute nodes:
# systemctl start openstack-nova-compute
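As an optional check after the restart, you can verify that all Compute services report as up with the standard client command, for example:
$ openstack compute service list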
Upgrading Clustering (sahara)
On all nodes hosting Clustering component services, run:
# systemctl stop '*sahara*'
# yum -d1 -y upgrade \*sahara\*
On the Controller node, where the database is installed, run:
# su -s /bin/sh -c "sahara-db-manage upgrade heads" sahara
After completing the package upgrade, restart the Clustering service by running the following command on all nodes hosting Clustering component services:
# systemctl start openstack-sahara-api \
   openstack-sahara-engine
Upgrading OpenStack Networking (neutron)
On your OpenStack Networking host, run:
# systemctl stop '*neutron*'
# yum -d1 -y upgrade \*neutron\*
On the same host, update the OpenStack Networking database schema:
# su -s /bin/sh -c "neutron-db-manage upgrade heads" neutron
Restart the OpenStack Networking service:
# systemctl start neutron-dhcp-agent \
   neutron-l3-agent \
   neutron-metadata-agent \
   neutron-openvswitch-agent \
   neutron-server
|Start any additional OpenStack Networking services enabled in your environment.|
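As an optional check, you can confirm that the networking agents reconnected after the restart, for example:
$ openstack network agent list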
After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:
# yum upgrade
This ensures that all packages are up to date. You may want to schedule a restart of your OpenStack hosts at a future date to ensure that all running processes use the updated versions of the underlying binaries.
Review the resulting configuration files. The upgraded packages will have installed .rpmnew files appropriate to the Ocata version of the service.
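You can locate the .rpmnew files with a standard find command, for example:
# find /etc -name '*.rpmnew'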
New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated and deprecated configuration options for each service, see Configuration Reference available from http://docs.openstack.org/ocata/config-reference.
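As a quick, illustrative scan, you could search the service log directories for deprecation warnings (adjust the paths to the services deployed in your environment):
# grep -ri deprecated /var/log/nova /var/log/keystone /var/log/glance /var/log/cinder /var/log/neutron /var/log/heat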