It's been a few weeks since the last update, so here's a roundup of recent blog posts from our community.
Red Hat joins the DPDK Project by Marcos Garcia - Principal Technical Marketing Manager
Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.
OpenStack Ocata has now been out for a little over a month – https://releases.openstack.org/ – and we’re about to see the first milestone of the Pike release. Past cycles show that now’s about the time when people start looking at the new release to see if they should consider moving to it. So here’s a quick overview of what’s new in this release.
The journey of a new OpenStack service in RDO by amoralej
When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users adopt them. This post is not an official policy document nor a detailed description of how to carry out some activities, but it provides some high-level recommendations to newcomers based on what I have learned and observed in the last year working in RDO.
InfraRed: Deploying and Testing OpenStack just made easier! by bregman
Deploying and testing OpenStack is very easy. If you read that headline and your eyebrows went up, you are in the right place. I believe that most of us who have experienced at least one deployment of OpenStack will agree that deploying OpenStack can be a quite frustrating experience. It doesn't matter if you are using it for […]
Steve Hardy talks about TripleO in the Ocata release, at the OpenStack PTG in Atlanta.
Steve: My name is Steve Hardy. I work primarily on the TripleO project, which is an OpenStack deployment project. What makes TripleO interesting is that it uses OpenStack components primarily in order to deploy a production OpenStack cloud. It uses OpenStack Ironic to do bare metal provisioning. It uses Heat orchestration in order to drive the configuration workflow. And we also recently started using Mistral, which is an OpenStack workflow component.
So it's kind of different from some of the other deployment initiatives. And it's a nice feedback loop where we're making use of the OpenStack services in the deployment story, as well as in the deployed cloud.
This last couple of cycles we've been working towards more composability. That basically means allowing operators more flexibility with service placement, and also allowing them to define groups of nodes in a more flexible way, so that you could specify different configurations - perhaps you have multiple types of hardware for different Nova compute configurations, or perhaps you want to scale particular services onto particular groups of nodes.
It's basically about giving operators more choice and flexibility in how they deploy their architecture.
Rich: Upgrades have long been a pain point. I understand there's some improvement in this cycle there as well?
Steve: Yes. Having delivered composable services and composable roles for the Newton OpenStack release, the next big challenge was upgrades: once operators have the flexibility to deploy services on arbitrary nodes in their OpenStack environment, you need some way to upgrade that doesn't make assumptions about which service is running on which group of nodes. So we've implemented a new feature called composable upgrades. It uses some Heat functionality combined with Ansible tasks to allow a very flexible, dynamic definition of what upgrade actions need to take place when you're upgrading some specific group of nodes within your environment. That's part of the new Ocata release. It's hopefully going to provide a better upgrade experience for end-to-end upgrades of all the OpenStack services that TripleO supports.
Rich: It was a very short cycle. Did you get done what you wanted to get done, or are things pushed off to Pike now?
Steve: I think there's a few remaining improvements around operator-driven upgrades, which we'll be looking at during the Pike cycle. It certainly has been a bit of a challenge with the short development timeframe during Ocata. But the architecture has landed, and we've got composable upgrade support for all the services in Heat upstream, so I feel like we've done what we set out to do in this cycle, and there will be further improvements around the operator-driven upgrade workflow and also containerization during the Pike timeframe.
Rich: This week we're at the PTG. Have you already had your team meetings, or are they still to come?
Steve: The TripleO team meetings start tomorrow, which is Wednesday. The previous two days have mostly been cross-project discussion. Some of which related to collaborations which may impact TripleO features, some of which was very interesting. But the TripleO schedule starts tomorrow - Wednesday and Thursday. We've got a fairly packed agenda, which is going to focus around - primarily the next steps for upgrades, containerization, and ways that we can potentially collaborate more closely with some of the other deployment projects within the OpenStack community.
Rich: Is Kolla something that TripleO uses to deploy, or is that completely unrelated?
Steve: The two projects are collaborating. Kolla provides a number of components, one of which is container definitions for the OpenStack services themselves, and the containerized TripleO architecture actually consumes those. There are some other pieces which are different between the two projects. We use Heat to orchestrate container deployment, and there's an emphasis on Ansible and Kubernetes on the Kolla side, where we're having discussions around future collaboration.
There's a session planned on our agenda for a meeting between the Kolla Kubernetes folks and TripleO folks to figure out if there's long-term collaboration there. But at the moment there's good collaboration around the container definitions, and we just orchestrate deploying those containers.
We'll see what happens in the next couple of days of sessions, and getting on with the work we have planned for Pike.
Note that this installs Nodepool version 0.4.0, which relies on Gearman and
still supports snapshot-based images. More recent versions of Nodepool require
a ZooKeeper service and only support diskimage-builder images, though the
usage is similar and easy to adapt.
Configure a cloud provider
Nodepool uses os-client-config to define cloud providers and it needs
a clouds.yaml file like this:
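A minimal sketch, assuming a cloud named "default" and placeholder credentials (adjust auth_url, username, password, and project_name to your environment):

clouds:
  default:
    auth:
      auth_url: http://keystone.example.com:5000   # placeholder endpoint
      username: nodepool
      password: secret
      project_name: nodepool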
Nodepool uses a gearman server to get node requests and to dispatch
image rebuild jobs. We'll use a local gearmand server on localhost.
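In nodepool.yaml the corresponding section would look something like this (4730 is gearman's default port):

gearman-servers:
  - host: localhost
    port: 4730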
Thus, Nodepool will only respect the min-ready value and it won't
dynamically start nodes.
Diskimages define image names and dib elements. All the elements
provided by dib, such as centos-minimal, are available.
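Here is a minimal sketch of a diskimages entry (the element list is illustrative; choose the elements your image actually needs):

diskimages:
  - name: dib-centos-7
    elements:
      - centos-minimal
      - vm
      - simple-init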
Providers define specific cloud provider settings such as the network name or
boot timeout. Lastly, labels define generic names for cloud images
to be used by job definitions.
To sum up, labels reference images in providers that are constructed from diskimages.
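Putting the pieces together, a provider and label definition might look like this sketch (the cloud, network, flavor, and image values are placeholders):

providers:
  - name: default
    cloud: default          # references the clouds.yaml entry above
    max-servers: 2
    boot-timeout: 120
    networks:
      - name: private       # must be a valid network in your tenant
    images:
      - name: centos-7
        diskimage: dib-centos-7
        min-ram: 2048
        username: centos

labels:
  - name: centos-7
    image: centos-7
    min-ready: 1
    providers:
      - name: default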
Nodepool will automatically initiate the image build, as shown in /var/log/nodepool/nodepool.log:
WARNING nodepool.NodePool: Missing disk image centos-7
Image building logs are available in /var/log/nodepool/builder-image.log.
Check the building process:
# nodepool dib-image-list
| ID | Image | Filename | Version | State | Age |
| 1 | dib-centos-7 | /var/lib/nodepool/dib/dib-centos-7-1490688700 | 1490702806 | building | 00:00:00:05 |
Once the dib image is ready, nodepool will upload the image:
nodepool.NodePool: Missing image centos-7 on default
When an image fails to build, Nodepool will retry indefinitely;
look for "after-error" in builder-image.log.
Check the upload process:
# nodepool image-list
| ID | Provider | Image | Hostname | Version | Image ID | Server ID | State | Age |
| 1 | default | centos-7 | centos-7 | 1490703207 | None | None | building | 00:00:00:43 |
Once the image is ready, nodepool will create an instance:
nodepool.NodePool: Need to launch 1 centos-7 nodes for default on default:
# nodepool list
| ID | Provider | AZ | Label | Target | Manager | Hostname | NodeName | Server ID | IP | State | Age |
| 1 | default | None | centos-7 | default | None | centos-7-default-1 | centos-7-default-1 | XXX | None | building | 00:00:01:37 |
Once the node is ready, you have completed the first part of the process
described in this article and the Nodepool service should be working properly.
If the node goes directly from the building to the delete state, Nodepool will
try to recreate the node indefinitely. Look for errors in nodepool.log.
One common mistake is an incorrect provider network configuration;
you need to set a valid network name in nodepool.yaml.
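For instance, the provider's networks entry must name a network that actually exists in your tenant ("private" below is an assumption; `openstack network list` shows valid names):

providers:
  - name: default
    networks:
      - name: private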
Here is a summary of the most common operations:
Force the rebuild of an image: nodepool image-build image-name
Force the upload of an image: nodepool image-upload provider-name image-name
Delete a node: nodepool delete node-id
Delete a local dib image: nodepool dib-image-delete image-id
Delete a glance image: nodepool image-delete image-id
Nodepool "check" cron periodically verifies that nodes are available.
When a node is shutdown, it will automatically recreate it.
Ready to use application deployment with Nodepool
As a cloud developer, it is convenient to always have access to a fresh
OpenStack deployment for testing purposes. It's easy to break things and it
takes time to recreate a test environment, so let's use Nodepool.
First we'll add a new element
to pre-install the typical RDO requirements:
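A dib element is just a directory of phase scripts. A minimal sketch of such an element (the element name and package list are assumptions, not the post's actual element):

mkdir -p elements/rdo-requirements/install.d
cat > elements/rdo-requirements/install.d/50-rdo-requirements <<'EOF'
#!/bin/bash
# pre-install typical RDO requirements into the image
set -eux
yum install -y centos-release-openstack-ocata
yum install -y git gcc python-devel
EOF
chmod +x elements/rdo-requirements/install.d/50-rdo-requirements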
At the OpenStack PTG in February, Stephen Finucane speaks about what's new in Nova in the Ocata release of OpenStack.
Stephen: I'm Stephen Finucane, and I work on Nova for Red Hat.
I've previously worked at Intel. During most of my time working on Nova I've been focused on the same kind of feature set, which is what Intel liked to call EPA - Enhanced Platform Awareness - or NFV applications. Making Nova smarter from the perspective of Telco applications. You have all this amazing hardware, how do you expose that up and take full advantage of that when you're running virtualized applications?
The Ocata cycle was a bit of an odd one for me, and probably for the project itself, because it was really short. The normal cycle runs for about six months. This one ran for about four.
During the Ocata cycle I actually got core status. That was probably as a result of doing a lot of reviews. Lot of reviews, pretty much every waking hour, I had to do reviews. And that was made possible by the fact that I didn't actually get any specs in for that cycle.
So my work on Nova during that cycle was mostly around reviewing Python 3 fixes. It's still very much a community goal to get support for Python 3 - 3.5 in this case. There was also a lot of work around improving how we do configuration - making it so that administrators can actually understand what different knobs and dials Nova exposes, what they actually mean, and what the implications of changing or enabling them actually are.
Both of these have been going in since before the Ocata cycle, and we made really good progress during the Ocata cycle to continue to get ourselves 70 or 80% of the way there, and in the case of config options, the work is essentially done there at this point.
Outside of that, the community as a whole, most of what went on this cycle was again a continuation of work that has been going on the last couple cycles. A lot of focus on the maturity of Nova. Not so much new features, but improving how we did existing features. A lot of work on resource providers, which are a way that we can keep track of the various resources that Nova's aware of, be they storage, or cpu, or things like that.
Coming forward, as far as Pike goes, it's still very much up in the air. That's what we're here discussing this week. From my perspective, a lot of the features I want to see involve doubling down on the NFV functionality that Nova supports - making things like SR-IOV easier to use, and more performant where possible. There's also going to be some work around resource providers again, for SR-IOV and NFV features and resources.
The other stuff that the community is looking at is pretty much up in the air. The idea of exposing capabilities is something that we've had a lot of discussion about already this week, and I expect we'll have a lot more. And then, again, evolution of the Nova code base - what more features the community wants, and various customers want - going and providing those.
This promises to be a very exciting cycle, on account of the fact that we're back into the full six month mode. There's a couple of new cores on board, and Nova itself is full steam ahead.
At the OpenStack PTG last month, Zane Bitter speaks about his work on OpenStack Heat in the Ocata cycle, and what comes next.
Rich: Tell us who you are and what you work on.
Zane: My name is Zane Bitter, and I work at Red Hat on Heat … mostly on Heat. I'm one of the original Heat developers. I've been working on the project since 2012 when it started.
Heat is the orchestration service for OpenStack. It's about managing how you create and maintain your resources that you're using in your OpenStack cloud over time. It manages dependencies between various things you have to spin up, like servers, volumes, networks, ports, all those kinds of things. It allows you to define in a declarative way what resources you want and it does the job of figuring out how to create them in the right order and do it reasonably efficiently. Not waiting too long between creating stuff, but also making sure you have all the dependencies, in the right order.
And then it can manage those deployments over time as well. If you want to change your thing, it can figure out what you need to do to change it - if you need to replace a resource, what it needs to do to replace that resource - and get everything pointed to the right things again.
Rich: What is new in Ocata? What have you been working on in this cycle?
Zane: What I've been working on in Ocata is having a way of auto-healing services. If your service dies for some reason, you'd like that to recover by itself, rather than having to page someone and say, hey, my service is down, and then go in there and manually fix things up. So I've been working on integration between a bunch of different services, some of which started during the previous cycle.
I was working with Fei Long Wang from Catalyst IT who is PTL of Zaqar, getting some integration work between Zaqar and Mistral, so you can now trigger a Mistral workflow from a message on the Zaqar queue. So if you set that up as a subscription in Zaqar, it can fire off a thing when it gets a message on that queue, saying, hey, Mistral, run this workflow.
That in turn is integrated with Aodh ("A-O-D-H", as some people call it; I'm told the correct pronunciation is "Aodh"), which is the alarming service for OpenStack. It can …
Rich: For some reason, I thought it was an acronym.
Zane: No, it's an Irish name.
Rich: That's good to know.
Zane: Eoghan Glynn was responsible for that one.
You can set up the alarm action for an alarm in Aodh to be to post a message to this queue. When you combine these together, that means that when an alarm goes off, it posts a message to a queue, and that can trigger a workflow.
What I've been working on in Ocata is getting that all packaged up into Heat templates so we have all the resources to create the alarm in Aodh, hook it up with the subscription … hook up the Zaqar queue to a Mistral subscription, and have that all configured in a template along with the workflow action, which is going to call Heat, and say, this server is unhealthy now. We know from external to Heat, we know that this server is bad, and then kick off the action which is to mark the server unhealthy. We then create a replacement, and then when that service is back up, we remove the old one.
Rich: Is that done, or do you still have stuff to do in Pike?
Zane: It's done. It's all working. It's in the Heat templates repository; there's an example in there, so you can try that out. There are a couple of caveats. There's a misfeature in Aodh - there's a delay between when you create the alarm and when … there's a short period where, when an event comes in, it may not trigger an alarm. That's one caveat. But other than that, once it's up and working, it works pretty reliably.
The other thing I should mention is that you have to turn on event alarms in Aodh, which is basically triggering alarms off of events in the … on the Oslo messaging notification bus, which is not on by default, but it's a one line configuration change.
Rich: What can we look forward to in Pike, or is it too early in the week to say yet?
Zane: We have a few ideas for Pike. I'm planning to work on a template where … so, Zaqar has pre-signed URLs, so you can drop a pre-signed URL into an instance, and allow that instance … node server, in other words … to post to that Zaqar queue without having any Keystone credentials, and basically all it can do with that URL is post to that one queue. Similar to signed URLs in ____. What that should enable us to do is create a template where we're putting signed URLs, with an expiry, into a server, and then, before they expire, we can re-create them, so we can have updating credentials, and hook that up to a Mistral subscription, and that allows the service to kick off a Mistral workflow to do something the application needs to do, without having credentials for anything else in OpenStack. Both Mistral and Heat can use Keystone trusts, to say, I will act on behalf of the user who created this workflow. So if we can allow them to trigger that through Zaqar, there's a pretty secure way of giving applications access to modify stuff in the OpenStack cloud, but locking it down to only the stuff you want modified, and not risking that if someone breaks into your VM, they've got your Keystone credentials and can do whatever they want with your account.
That's one of the things I'm hoping to work on.
As well, we're continuing with Heat development. We've switched over to the new convergence architecture. In Newton, I think, was the first release to have that on by default. We're looking at improving performance with that now. We've got the right architecture for scaling out to a lot of Heat engines. Right now, it's a little heavy on database, a little heavy on memory, which is the tradeoff you make when you go from a monolithic architecture, which can be quite efficient, but doesn't scale out well, to, you scale out but there's potentially performance problems. I think there's some low-hanging fruit there, we should be able to crank up performance. Memory use, and database accesses. Look for better performance out of the convergence architecture in Heat, coming up in Pike.
When new contributors join RDO, they ask for recommendations about
how to add new services and help RDO users adopt them. This post is
not an official policy document nor a detailed description of how to carry
out some activities, but it provides some high-level recommendations to newcomers
based on what I have learned and observed in the last year working in RDO.
Note that you are not required to follow all these steps, and you may even
have your own ideas about how to do things. If you want to discuss them, let us know your thoughts; we are always open to improvements.
1. Adding the package to RDO
The first step is to add the package(s) to the RDO repositories, as shown
in the RDO documentation.
This typically includes the main service package, the client library, and maybe
a package with a plugin for Horizon.
In some cases new packages require some general purpose libraries. If they
are not in CentOS base channels, RDO imports them from Fedora packages
into a dependencies repository. If you need a new dependency which already
exists in Fedora, just let us know and we'll import it into the repo. If it
doesn't exist, you'll have to add the new package into Fedora following
the existing process.
2. Create a puppet module
Although there are multiple deployment tools for OpenStack based on several
frameworks, puppet is widely used by different tools, and even directly
by operators, so we recommend creating a puppet module to deploy your new service,
following the Puppet OpenStack Guide.
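As a very rough sketch, the core manifest of a module for a hypothetical service (every name below is illustrative, not a real RDO package or module) wires together the usual package and service resources:

class newservice (
  $package_ensure = 'present',
) {
  # install the service package built in RDO (hypothetical package name)
  package { 'openstack-newservice':
    ensure => $package_ensure,
  }

  # keep the API service running once the package is in place
  service { 'openstack-newservice-api':
    ensure  => running,
    enable  => true,
    require => Package['openstack-newservice'],
  }
}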
Once the puppet module is ready, remember to follow the RDO new package process
to get it packaged in the repos.
3. Make sure the new service is tested in RDO-CI
As explained in a previous post
we run several jobs in RDO CI to validate the content of our repos. Most
of the time, the first way to get a new service tested is to add it
to one of the puppet-openstack-integration scenarios, which is also
the recommended way to get the puppet module tested in upstream gates. An example
of how to add a new service into p-o-i is in this review.
4. Adding deployment support in Packstack
If you want to make it easier for RDO users to evaluate a new service, adding
it to Packstack is a good idea.
Packstack is a puppet-based deployment tool used by RDO users to deploy small proof
of concept (PoC) environments to evaluate new services or configurations
before deploying them in their production clouds. If you are interested, you can
take a look at these two reviews,
which added support for Panko and Magnum in the Ocata cycle.
5. Add it to TripleO
TripleO is a powerful
OpenStack management tool able to provision and manage cloud environments
with production-ready features, such as high availability, extended security,
etc… Adding support for new services in TripleO will help users
adopt it for their cloud deployments. The TripleO composable roles tutorial
can guide you through how to do it.
6. Build containers for new services
Kolla is the upstream
project providing container images and deployment tools to operate OpenStack
clouds using container technologies. Kolla supports building images for
the CentOS distro using the binary method, which uses packages from RDO. Operators using
containers will have an easier time if you add containers for new services.
Follow OpenStack governance policies
RDO methodology and tooling are conceived according to the OpenStack upstream
release model, so following the policies about release management
is a big help in maintaining packages in RDO. It's especially important to create
branches and version tags as defined by the releases team.
Advertise your work to the RDO community
Making potential users aware of the availability of new services or other
improvements is a good practice. RDO provides several ways to do this, such as
sending mail to our mailing lists,
writing a post on the blog, adding
references in our documentation, creating screencast demos, etc… You
can also join the RDO weekly meeting
to let us know about your work.
Join RDO Test Days
RDO organizes test days at several
milestones during each OpenStack release cycle. Although we do Continuous
Integration testing in RDO, it's good to verify that everything can be deployed
following the instructions in the documentation. You can propose new
services or configurations in the test matrix and add a link to the
documented instructions on how to do it.
A tale of Tempest rpm with Installers by chandankumar
Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with installers so it can be consumed by various CIs rather than using raw upstream sources.
Using Software Factory to manage Red Hat OpenStack Platform lifecycle by Maria Bracho, Senior Product Manager OpenStack
by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery. Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard as a story and issue tracker. It also ensures a reproducible test environment with ephemeral Jenkins slaves.
Joe: Hi, I'm Joe Talerico. I work on OpenStack at Red Hat, doing OpenStack performance. In Ocata, we're going to be looking at doing API and dataplane performance and performance CI. In Pike we're looking at doing mix/match workloads of Rally, Shaker,
and perfkit benchmarker, and different styles, different workloads running concurrently. That's what we're looking forward to in Pike.
Rich: How long have you been working on this stuff?
Joe: OpenStack performance, probably right around four years now. I started with doing Spec Cloud development, and Spec Cloud development turned into doing performance work at Red Hat for OpenStack … actually, it was Spec Virt, then Spec Cloud, then performance at OpenStack.
Rich: What kind of things were in Ocata that you find interesting?
Joe: In Ocata … for us … well, in Newton, composable roles, but building upon that, in TripleO, being able to do … breaking out the control plane even further, being able to scale out our deployments to much larger clouds. In Ocata, we're looking to work with CNCF, and do a 500 node deployment, and then put OpenShift on top of that, and find some more potential performance issues, or performance gains, going from Newton to Ocata. We've done this previously with Newton, we're going to redo it with Ocata.
One of the key goals in RDO is to provide a set of well-tested and up-to-date
repositories that can be smoothly used by our users:
Operators deploying OpenStack with any of the available tools.
Upstream projects using RDO repos to develop and test their patches, such as the OpenStack
puppet modules, TripleO, or Kolla.
To include new patches in RDO packages as soon as possible, in RDO Trunk
we build and publish new packages when commits are merged in upstream repositories.
To ensure the content of these packages is trustworthy, we run different tests which
help us identify any problem introduced by the committed changes.
This post provides an overview of how we test RDO repositories. If you
are interested in collaborating with us in running and improving it, feel free to
let us know in the #rdo channel on freenode or on the rdo-list mailing list.
Promotion pipelines are composed of a set of related CI jobs that are executed
for each supported OpenStack release to test the content of a specific RDO repository.
Currently, promotion pipelines are executed in different phases:
Define the repository to be tested. RDO Trunk repositories are identified
by a hash based on the upstream commit of the last built package. The content of
these repos doesn't change over time. When a promotion pipeline is launched, it
grabs the latest consistent hash repo and sets it to be tested in the following phases.
Build TripleO images. TripleO is
the recommended deployment tool for production usage in RDO and, as such, is tested
in RDO CI jobs. Before actually deploying OpenStack using TripleO, the required
images are built.
Deploy and test RDO. We run a set of jobs which deploy and test OpenStack
using different installers and scenarios to ensure they behave as expected. Currently,
the following deployment tools and configurations are tested:
OpenStack Puppet scenarios. The puppet-openstack-integration project (a.k.a. p-o-i)
maintains a set of puppet manifests to deploy different combinations and
configurations (scenarios) of OpenStack services on a single server using the OpenStack
puppet modules, and to run tempest smoke tests for the deployed services. The
services tested in each scenario can be found in the documentation
for p-o-i. Scenarios 1, 2, and 3 are currently tested in RDO CI as
Packstack deployment. As part of the upstream testing, packstack defines
three deployment scenarios
to verify the correct behavior of the existing options. Note that tempest smoke tests
are also executed in these jobs. In RDO CI we leverage those scenarios to test
new packages built in RDO repos.
Repository and images promotion. When all jobs in the previous phase succeed,
the tested repository is considered good and it is promoted so that users can use these packages.
weirdo is the tool we
use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI.
It's composed of a set of ansible roles and playbooks that prepare the environment,
then deploy and test the installers using the testing scripts provided by each project.
tripleo-quickstart provides a set of scripts, ansible roles, and pre-defined configurations
to deploy an OpenStack cloud using TripleO
in a simple and fully automated way.
ARA is used to store and visualize
the results of ansible playbook runs, making it easier to analyze and troubleshoot them.
RDO is part of the CentOS Cloud Special Interest Group, so we run promotion pipelines
in the CentOS CI infrastructure, where Jenkins
is used as the continuous integration server.
Handling issues in RDO CI
An important aspect of running RDO CI is properly managing the errors found in the
jobs included in the promotion pipelines. The root cause of these issues sometimes
lies in the OpenStack upstream projects:
Some problems are not caught by the devstack-based jobs running in upstream gates.
In some cases, new versions of OpenStack services require changes in the deployment
tools (puppet modules, TripleO, etc…).
One of the contributions of RDO to upstream projects is to increase the test coverage of
the projects and help identify problems as soon as possible. When we find them,
we report them upstream as Launchpad bugs and propose fixes when possible.
Tempest is a set of integration tests to run against an OpenStack cloud.
Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest
to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration,
packstack, and tripleo-quickstart.
This is the story of how we integrated the RDO Tempest RPM package with installers so it can be consumed by various CIs rather than using raw upstream sources.
And the story begins here:
In RDO, we deliver Tempest as an rpm to be consumed by anyone to test their cloud. Until the Newton release,
we maintained a fork of Tempest which contained the config_tempest.py script to auto-generate tempest.conf
for your cloud, a set of helper scripts to run Tempest tests, and some backports for each release. From Ocata, we have changed the source of the Tempest rpm from the forked Tempest to upstream Tempest, keeping
the old source up to Newton in RDO through rdoinfo. We are using the rdo-patches branch
to maintain patch backports starting from the Ocata release.
With this change, we have moved the config_tempest.py script from the forked Tempest repository to a separate project, python-tempestconf,
so that it can be used with vanilla Tempest to generate the Tempest configuration automatically.
What have we done to make a happy integration between the Tempest rpm and the installers?
Currently, puppet-openstack-integration, packstack, and tripleo-quickstart heavily use RDO packages, so using the Tempest rpm with these installers is the best match.
Before starting the integration, we needed to prepare the ground: until the Newton release, all these installers were using Tempest from source in their respective CIs.
We then started the matchmaking of the Tempest rpm with the installers.
puppet-openstack-integration and packstack consume puppet modules, so in order to consume the Tempest rpm, we first needed to fix puppet-tempest.
It is a puppet module to install and configure Tempest, and the Tempest plugins of OpenStack services, from source as well as from packages.
We fixed puppet-tempest to install the Tempest rpm from the package and create a Tempest workspace [https://review.openstack.org/#/c/425085/].
To use that feature through the puppet-tempest module, add install_from_source => 'false' and tempest_workspace => '<path to tempest workspace>' to tempest.pp, and it will do the job for you.
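For example, a minimal tempest.pp sketch (the workspace path here is an assumption; use whatever path suits your environment):

class { '::tempest':
  # install Tempest from the RDO package instead of from source
  install_from_source => 'false',
  # where puppet-tempest creates the Tempest workspace (path is an assumption)
  tempest_workspace   => '/var/lib/tempest',
}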
Now we are using the same feature in puppet-openstack-integration and packstack.
puppet-openstack-integration is a collection of scripts and manifests for puppet module testing (which powers the OpenStack puppet CI).
From the Ocata release, we have added a TEMPEST_FROM_SOURCE flag in the run_tests.sh script.
Just change TEMPEST_FROM_SOURCE to false in run_tests.sh, and Tempest is then installed and configured from packages using puppet-tempest.
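A sketch of what that looks like, assuming run_tests.sh honors environment overrides with the usual ${VAR:-default} pattern (otherwise edit the assignment directly in the script, as described above; the SCENARIO variable is also an assumption):

# pick a p-o-i scenario (assumption: SCENARIO selects the manifest)
export SCENARIO=scenario001
# install and configure Tempest from packages via puppet-tempest
export TEMPEST_FROM_SOURCE=false
./run_tests.sh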
Packstack is a utility to install OpenStack on CentOS, Red Hat Enterprise Linux, or other derivatives in proof of concept (PoC) environments. Until Newton, Tempest was installed and run by packstack from the upstream source, with puppet-tempest doing the job for us behind the scenes. From Ocata, we have replaced this with the Tempest RDO package. You can use this feature by running the following command:
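A likely invocation is sketched below; --allinone and --provision-tempest are packstack's provisioning options, while --run-tempest is assumed to be the Ocata-era flag this paragraph refers to:

# all-in-one PoC deployment that provisions and runs Tempest from the RDO package
packstack --allinone --provision-tempest=y --run-tempest=y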
tripleo-quickstart is an Ansible-based project for setting up TripleO virtual environments.
It uses tripleo-quickstart-extras, where the validate-tempest role is used to install, configure, and run Tempest on a TripleO deployment after installation. We have improved the validate-tempest role to use the Tempest rpm package for all releases supported by OpenStack upstream, keeping the old workflow, using the Ocata Tempest rpm, using ostestr to run Tempest tests for all releases, and using python-tempestconf to generate tempest.conf, through this patch.
So the integration of the Tempest rpm with the installers is finally done; it is happily consumed in different CIs, and this will help to test and produce a more robust OpenStack cloud in RDO, as well as catch issues between Tempest and Tempest plugins early.
Thanks to apevec, jpena, amoralej, Haikel, dmsimard, dmellado, tosky, mkopec, arxcruz, sshnaidm, mwhahaha, EmilienM
and many more on the #rdo channel for getting this work done in the last two and a half months. It was a great learning experience.