RDO Community News

See also blogs.rdoproject.org

How to Install and Run Tempest

UPDATES: RDO Ocata introduces a few changes that partially obsolete the content of this article; check the updated article for details.

Tempest is a set of integration tests to run against an OpenStack cluster. In this blog post I'm going to show you how to install Tempest from the git repository, how to install all of its requirements, and how to run tests against an OpenStack cluster.

I'm going to use a fresh installation of CentOS 7 with an OpenStack cluster deployed by Packstack. Once you have that, follow the instructions below.

Tempest Installation

You have two options for installing Tempest: you can install it through RPM, or you can clone it from the GitHub repository. If you choose installation through RPM, follow this link.

Installation from GitHub repository

You can clone either upstream Tempest or Red Hat's fork of it. The fork provides config_tempest.py, a configuration tool that will generate tempest.conf for you, which can be handy.

[1.] Install dependencies:

$ sudo yum install -y gcc python-devel libffi-devel openssl-devel

[2.] Clone tempest:

$ git clone https://github.com/openstack/tempest.git

Or (Red Hat's fork):

$ git clone https://github.com/redhat-openstack/tempest.git

[3.] Install pip, for example:

$ curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
$ sudo python get-pip.py

[4.] Install tempest globally in the system. If you don't want to do that, skip this step and continue reading.

$ sudo pip install tempest/

Install tempest in a virtual environment

Sometimes you don't want to install things globally in the system; in that case you may want to use a virtual environment. I'm going to explain installation through both virtualenv and tox.

Setting up Tempest using virtualenv

[1.] Install virtualenv:

$ easy_install virtualenv

Or through pip:

$ pip install virtualenv

[2.] Enter tempest directory you've cloned before:

$ cd tempest/

[3.] Create a virtual environment and let's name it .venv:

$ virtualenv .venv
$ source .venv/bin/activate

[4.] Install requirements:

(.venv) $ pip install -r requirements.txt
(.venv) $ pip install -r test-requirements.txt

NOTE: If problems occur while installing the requirements, it may be due to an old version of pip; upgrading may help:

(.venv) $ pip install pip --upgrade

[5.] After the dependencies are installed, run the following commands to install Tempest within the virtual environment:

(.venv) $ cd ../
(.venv) $ pip install tempest/

Or this command does the same without using pip:

$ python setup.py install

If you need to trigger the installation in developer mode, run:

(.venv) $ python setup.py develop

`setup.py develop` exists because of limitations in [pbr](http://docs.openstack.org/developer/pbr/). If you are interested, [here is](https://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode) an explanation of the difference between `install` and `develop`.

Setting up Tempest using TOX

[1.] Install tox:

$ easy_install tox

Or if you want to use pip:

$ pip install tox

[2.] Install tempest:

$ tox -epy27 --notest
$ source .tox/py27/bin/activate

This will create a virtual environment named `.tox`, install all the dependencies (*requirements.txt* and *test-requirements.txt*), and install Tempest within it. If you check the `tox.ini` file, you'll see that tox actually runs the develop-mode installation you could otherwise perform manually, as explained above.
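
For reference, the relevant part of Tempest's tox.ini looks roughly like this (a simplified sketch, not the exact file); the `usedevelop` flag is what makes tox perform the develop-mode installation:

```ini
[tox]
envlist = py27

[testenv]
; install the project itself with "setup.py develop"
usedevelop = True
; install both requirements files into the environment
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
```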

Optional:

[3.] If you want to expose system site-packages, tox will do it for you. Deactivate the environment you are currently in (if you followed the previous step) and create another one:

(py27) $ deactivate
$ tox -eall-plugin --notest
$ source .tox/all-plugin/bin/activate

[4.] Then, if you want to install plugin test packages based on the OpenStack components installed, let this script do it:

(all-plugin) $ sudo python tools/install_test_packages.py
(all-plugin) $ python setup.py develop

Generate tempest.conf

You can read about tempest.conf and what it is used for in this documentation. If you want to create tempest.conf, let config_tempest.py do it for you. The tool is part of the Tempest RPM (check this documentation); if you don't want to install Tempest globally, you can clone Red Hat's Tempest fork and install it within a virtual environment as explained above.

Red Hat's tempest fork

Create a virtual environment as mentioned above and source the credentials (if you installed the OpenStack cluster with Packstack, the credentials are saved in /root/):

(.venv) $ source /root/keystone_admin

And run config tool:

(.venv) $ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
         identity.admin_password  $OS_PASSWORD --create

After this, `./etc/tempest.conf` is generated.

NOTE: If you are running OSP, you need to pass an additional argument to the config_tempest tool:

(.venv) $ ./tools/config_tempest.py object-storage.operator_role swiftoperator

This is because OSP uses a lowercase name for the Swift operator_role, while Tempest's default value is "SwiftOperator". To override the default value, run the config tool like this:

$ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
  identity.admin_password  $OS_PASSWORD \
  object-storage.operator_role swiftoperator --create
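
To illustrate what these overrides do, here is a minimal sketch: each section.key value pair passed to config_tempest.py ends up as a plain INI entry in the generated tempest.conf. The file content below is hypothetical, written by hand rather than by the tool:

```shell
# Hypothetical sketch of the result: "object-storage.operator_role swiftoperator"
# becomes an entry under the [object-storage] section of etc/tempest.conf.
mkdir -p etc
cat > etc/tempest.conf <<'EOF'
[identity]
uri = http://127.0.0.1:5000/v2.0

[object-storage]
operator_role = swiftoperator
EOF
# Verify the override landed in the expected section
grep -q 'operator_role = swiftoperator' etc/tempest.conf && echo "override applied"
```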

Running tests

If you've installed Tempest and have a tempest.conf, you can start testing. To run tests you can use testr or ostestr. If you want to run Tempest's unit tests, check this out.

Note: run the following commands within the virtual environment you've created before. To run specific tests, run for example:

$ python -m testtools.run tempest.api.volume.v2.test_volumes_list

Or:

$ ostestr --regex tempest.api.volume.v2.test_volumes_list
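
To illustrate how the --regex argument selects tests, here is a sketch using grep over a hypothetical list of test names; ostestr matches the pattern against the full dotted test name in much the same way:

```shell
# Hypothetical list of discovered test names (not real output)
cat > all-tests.txt <<'EOF'
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server
tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list
EOF
# The regex selects only tests under the given module path
grep 'tempest\.api\.volume\.v2\.test_volumes_list' all-tests.txt
```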

Alternatively you can use tox, for example:

$ tox -efull

Run only tests tagged as smoke:

$ tox -esmoke

View article »

Chasing the trunk, but not too fast

As explained in a previous post, in RDO Trunk repositories we try to provide packages for new commits in OpenStack projects as soon as possible after they are merged upstream. This has a number of advantages:

  • It allows packagers to identify packaging issues just after they are introduced.
  • For project developers, their changes are tested in non-devstack environments, so test coverage is extended.
  • For deployment tools projects, they can use these repos to identify problems with new versions of packages and to start integrating any enhancement added to projects as soon as it's merged.
  • For operators, they can use these packages as hot-fixes to install in their RDO clouds before patches are included in official packages.

This means that for every merged commit a new package is created and a yum repository is published on the RDO Trunk server. This repo includes the just-built package and the latest builds of the rest of the packages in the same release.

Initially, we applied this approach to every package included in RDO. However, while testing these repos during the Newton cycle we observed that jobs failed with errors that didn't affect OpenStack upstream gates. The reason is that commits in OpenStack gate jobs are tested with the versions of libraries and clients defined in the upper-constraints.txt files of the requirements project for the branch where the change is proposed. Typically these are the latest tagged releases. As RDO was testing with libraries from the latest commit instead of the latest release, we were effectively ahead of upstream tests, running too fast.
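
As a sketch of what pinning to upper-constraints means in practice, an upper-constraints.txt-style file maps each project to one exact version, which tooling can then look up. The sample content below is made up for illustration, not taken from the real requirements file:

```shell
# Hypothetical upper-constraints.txt-style content
cat > upper-constraints.txt <<'EOF'
oslo.config===3.17.0
python-novaclient===6.0.0
EOF
# Extract the pinned version for one project
awk -F'===' '$1 == "oslo.config" {print $2}' upper-constraints.txt
```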

While this provided some interesting information and let us identify issues very early, it made it very difficult to get stable repositories that could be promoted and used. After some discussions in the RDO weekly meeting, it was decided to change the way libraries are managed, to leverage the work done in upstream gates while still trying to catch issues as soon as possible:

  • For the master branch, it was decided to pin libraries and clients to the versions included in upper-constraints for the repositories served at http://trunk.rdoproject.org/centos7. These repositories are used by RDO CI promotion jobs and marked as current-passed-ci when all tests succeed.
  • Additionally, a new builder was created that chases master for all packages, including libraries, clients, etc. This builder is able to catch issues based on the unit tests executed when packages are created. The produced repos are available at http://trunk.rdoproject.org/centos7-master-head, but promotion jobs are not executed using them.

The differences between master and master-head are shown in the following diagram:

RDO master pins

  • For releases in the maintenance phase, we pin libraries to what's found in the upper-constraints.txt file of the corresponding branch.

Implementation

In order to manage the library versions properly, RDO uses a peer-reviewed workflow of Gerrit reviews proposed to the rdoinfo project on http://review.rdoproject.org. You can see an example here.

A job runs periodically and automatically creates Gerrit reviews when versions are updated in the upper-constraints files. Manual approval is needed to get the changes merged and the new versions built by the DLRN builders.

Read More »

Blog posts last week

We've had more followup blog posts from OpenStack Summit, along with some more from the RDO community.

Querying haproxy data using socat from CLI by Carlos Camacho

Currently (most users) I don’t have any way to check the haproxy status in a TripleO virtual deployment (via web-browser) if not previously created some tunnels enabled for that purpose.

Read more at http://tm3.org/c3

Keystone Domains are Projects by Adam Young

Yesterday, someone asked me about inherited role assignments in Keystone projects. Here is what we worked out.

Read more at http://tm3.org/c4

OpenStack Summit: An evening with Ceph and RDO by Rich Bowen

Last Tuesday in Barcelona, we gathered with the Ceph community for an evening of food, drinks, and technical sessions.

Read more at http://tm3.org/c5

OpenStack Summit Barcelona, 3 of N by rbowen

Continuing the saga of OpenStack Summit Barcelona …

Read more at http://tm3.org/c6

Red Hat Virtualization: Bridging the Gap with the Cloud and Hyperconverged Infrastructure by Ted Brunell

Red Hat Virtualization offers a flexible technology for high-intensive performance and secure workloads. Red Hat Virtualization 4.0 introduced new features that enable customers to further extend the use case of traditional virtualization in hybrid cloud environments. The platform now easily incorporates third party network providers into the existing environment along with other technologies found in next generation cloud platforms such as Red Hat OpenStack Platform and Red Hat Enterprise Linux Atomic Host. Additionally, new infrastructure models are now supported including selected support for hyperconverged infrastructure; the native integration of compute and storage across a cluster of hosts in a Red Hat Virtualization environment.

Read more at http://tm3.org/c7

Running Tempest on RDO OpenStack Newton by chandankumar

Tempest is a set of integration tests to run against an OpenStack cluster.

Read more at http://tm3.org/bk

View article »

OpenStack Summit: An evening with Ceph and RDO

Last Tuesday in Barcelona, we gathered with the Ceph community for an evening of food, drinks, and technical sessions.

There were 215 in attendance at last count, and we had 12 presentations, from members of both communities.

Alfredo

A huge thank you to all of the speakers, and to all of the people who turned out for this great evening.

pool

More photos HERE.

Some of the presentations from the event are available HERE

View article »

Blog posts last week

With OpenStack Summit last week, we have a lot of summit-focused blog posts today, and expect more to come in the next few days.

Attending OpenStack Summit Ocata by Julien Danjou

For the last time in 2016, I flew out to the OpenStack Summit in Barcelona, where I had the chance to meet (again) a lot of my fellow OpenStack contributors there.

Read more at http://tm3.org/bu

OpenStack Summit, Barcelona, 2 of n by rbowen

Tuesday, the first day of the main event, was, as always, very busy. I spent most of the day working the Red Hat booth. We started at 10 setting up, and the mob came in around 10:45.

Read more at http://tm3.org/bx

OpenStack Summit, Barcelona, 1 of n by rbowen

I have the best intentions of blogging every day of an event. But every day is always so full, from morning until the time I drop into bed exhausted.

Read more at http://tm3.org/by

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the new-for-Newton composable services model.

Read more at http://tm3.org/bo

Integrating Red Hat OpenStack 9 Cinder Service With Multiple External Red Hat Ceph Storage Clusters by Keith Schincke

This post describes how to manually integrate Red Hat OpenStack 9 (RHOSP9) Cinder service with multiple pre-existing external Red Hat Ceph Storage 2 (RHCS2) clusters. The final configuration goals are to have Cinder configuration with multiple storage backends and support …

Read more at http://tm3.org/bz

On communities: Sometimes it's better to over-communicate by Flavio Percoco

Communities, regardless of their size, rely mainly on the communication there is between their members to operate. The existing processes, the current discussions, and the future growth depend heavily on how well the communication throughout the community has been established. The channels used for these conversations play a critical role in the health of the communication (and the community) as well.

Read more at http://tm3.org/c0

Full Stack Automation with Ansible and OpenStack by Marcos Garcia - Principal Technical Marketing Manager

Ansible offers great flexibility. Because of this the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.

Read more at http://tm3.org/bs

View article »

Next week in Barcelona

Join us next week in Barcelona for OpenStack Summit. We'll be gathering from around the world to celebrate the Newton release, and plan for the Ocata cycle.

RDO will have a table in the Red Hat booth, where we'll be answering your questions about RDO. And we'll have ducks, as usual.

duck

On Tuesday evening, join us for an evening with RDO and Ceph, with technical presentations about both projects, as well as drinks and light snacks.

And, throughout the week, RDO enthusiasts are giving a wide variety of talks about all things OpenStack.

If you're using RDO, please stop by and tell us about it. We'd love to meet you, and find out what we, as a project, can do better for you and your organization.

See you in Barcelona!

View article »

RDO blog posts this week

Here's what RDO enthusiasts have been blogging about in the last few days.

RDO Newton Released by Rich Bowen

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Newton for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Newton is the 14th release from the OpenStack project, which is the work of more than 2700 contributors from around the world (source).

Read more at http://tm3.org/bm

How to run Rally on Packstack environment by mkopec

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking & profiling. For OpenStack deployment I used packstack tool.

Read more at http://tm3.org/bn

TripleO Composable Services 101 by Steve Hardy

Over the newton cycle, we've been working very hard on a major refactor of our heat templates and puppet manifests, such that a much more granular and flexible "Composable Services" pattern is followed throughout our implementation. It's been a lot of work, but it's been a frequently requested feature for some time, so I'm excited to be in a position to say it's complete for Newton (kudos to everyone involved in making that happen!) :) This post aims to provide an introduction to this work, an overview of how it works under the hood, some simple usage examples and a roadmap for some related follow-on work.

Read more at http://tm3.org/8b

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the new-for-Newton composable services model. The final piece of the composability model we've been developing this cycle is the ability to deploy user-defined custom roles, in addition to (or even instead of) the built-in TripleO roles (where a role is a group of servers, e.g. "Controller", which runs some combination of services). What follows is an overview of this new functionality, the primary interfaces, and some usage examples and a summary of future planned work.

Read more at http://tm3.org/bo

Ceph/RDO meetup in Barcelona at OpenStack Summit by Rich Bowen

If you'll be in Barcelona later this month for OpenStack Summit, join us for an evening with RDO and Ceph.

Read more at http://tm3.org/bp

Translating Between RDO/RHOS and upstream releases Redux by Adam Young

I posted this once before, but we’ve moved on a bit since then. So, an update.

Read more at http://tm3.org/bq

View article »

RDO Newton Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Newton for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Newton is the 14th release from the OpenStack project, which is the work of more than 2700 contributors from around the world (source).

The RDO community project curates, packages, builds, tests, and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds. At latest count, RDO contains 1157 packages.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Getting Started

There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try the All-In-One Quickstart. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart and you'll be running a production cloud in short order.

Finally, if you want to try out OpenStack, but don't have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack.

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org, for more developer-oriented content we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we're there too, and also Google+.

And, if you're going to be in Barcelona for the OpenStack Summit two weeks from now, join us on Tuesday evening at the Barcelona Princess, 5pm - 8pm, for an evening with the RDO and Ceph communities. If you can't make it in person, we'll be streaming it on YouTube.

View article »

How to run Rally on Packstack environment

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking & profiling. For OpenStack deployment I used packstack tool.

Install Rally

[1.] Install rally:

$ sudo yum install openstack-rally

[2.] After the installation is complete set up the Rally database:

$ sudo rally-manage db recreate

Register an OpenStack deployment

You have to provide Rally with the OpenStack deployment it is going to benchmark. To do that, we're going to use the keystone credentials file generated by the packstack installation.

[1.] Source the credentials file:

$ source keystone_admin

[2.] Create a rally deployment and name it "existing":

$ rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| 6973e349-739e-41af-947a-34230b7383f8 | 2016-10-05 08:24:27.939523 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+

[3.] You can verify that your current deployment is healthy and ready to be benchmarked with the deployment check command:

$ rally deployment check
+-------------+--------------+-----------+
| services    | type         | status    |
+-------------+--------------+-----------+
| ceilometer  | metering     | Available |
| cinder      | volume       | Available |
| glance      | image        | Available |
| gnocchi     | metric       | Available |
| keystone    | identity     | Available |
| neutron     | network      | Available |
| nova        | compute      | Available |
| swift       | object-store | Available |
+-------------+--------------+-----------+

Run Rally

The sequence of benchmarks to be launched by Rally should be specified in a benchmark task configuration file (in either JSON or YAML format). Let's create one of the sample benchmark tasks, for example a task that boots and deletes a server.

[1.] Create a new file and name it boot-and-delete.json

[2.] Copy this to the boot-and-delete.json file:

{% set flavor_name = flavor_name or "m1.tiny" %}
{% set image_name = image_name or "cirros" %}
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        },
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "auto_assign_nic": true
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                },
                "network": {
                    "start_cidr": "10.2.0.0/24",
                    "networks_per_tenant": 2
                }
            }
        }
    ]
}
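
As a quick sanity check, note that the task file is a Jinja2 template that must render to valid JSON once the variables get their declared defaults. A minimal sketch (heavily trimmed template; Rally does the real rendering when the task starts):

```shell
# Trimmed, hypothetical version of the task template above
cat > task.j2 <<'EOF'
{% set flavor_name = flavor_name or "m1.tiny" %}
{"NovaServers.boot_and_delete_server": [{"args": {"flavor": {"name": "{{flavor_name}}"}}}]}
EOF
# Drop the Jinja2 'set' line, substitute the default, and validate the JSON
sed -e '/{% set/d' -e 's/{{flavor_name}}/m1.tiny/g' task.j2 | python3 -m json.tool
```

Rally can also override these defaults at run time, e.g. `rally task start boot-and-delete.json --task-args '{"flavor_name": "m1.small"}'`.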

[3.] Run the task:

$ rally task start boot-and-delete.json

After a successful run you'll see information such as the task ID, response times, duration, etc. Note that the Rally input task above uses 'cirros' as the image name and 'm1.tiny' as the flavor name. If this benchmark task fails, the reason might be a non-existing image/flavor specified in the task. To check which images/flavors are available in the deployment you are currently benchmarking, you can use the rally show commands:

$ rally show images
$ rally show flavors

More about Rally task templates can be found in the Rally documentation.

View article »

Ceph/RDO meetup in Barcelona at OpenStack Summit

If you'll be in Barcelona later this month for OpenStack Summit, join us for an evening with RDO and Ceph.

Tuesday evening, October 25th, from 5 to 8pm (17:00 - 20:00) we'll be at the Barcelona Princess, right across the road from the Summit venue. We'll have drinks, light snacks, and presentations from both Ceph and RDO.

If you can't make it in person, we'll also be streaming the event on YouTube

Topics we expect to be covered include (not necessarily in this order):

  • RDO release status (aarch64, repos, workflow)
  • RDO repos overview (CBS vs Trunk, and what goes where)
  • RDO and Ceph (maybe TripleO and Ceph?)
  • Quick look at new rpmfactory workflow with rdopkg
  • CI in RDO - what are we testing?
  • CERN – How to replace several petabytes of Ceph hardware without downtime
  • Ceph at SUSE
  • Ceph on ARM
  • 3D Xpoint & 3D NAND with OpenStack and Ceph
  • Bioinformatics – Openstack and Ceph used in large scale cancer research projects

If you expect to be at the event, please consider signing up on Eventbrite so we have an idea of how many people to expect. Thanks!

View article »