RDO Community News

See also blogs.rdoproject.org

RDO CI promotion pipelines in a nutshell

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:

  • Operators deploying OpenStack with any of the available tools.
  • Upstream projects using RDO repos to develop and test their patches, such as the OpenStack puppet modules, TripleO or Kolla.

To include new patches in RDO packages as soon as possible, in the RDO Trunk repos we build and publish new packages when commits are merged in upstream repositories. To ensure the content of these packages is trustworthy, we run different tests which help us identify any problems introduced by the committed changes.

This post provides an overview of how we test RDO repositories. If you are interested in collaborating with us in running and improving it, feel free to let us know in the #rdo channel on Freenode or on the rdo-list mailing list.

Promotion Pipelines

Promotion pipelines are composed of a set of related CI jobs that are executed for each supported OpenStack release to test the content of a specific RDO repository. Currently, promotion pipelines are executed in different phases:

  1. Define the repository to be tested. RDO Trunk repositories are identified by a hash based on the upstream commit of the last built package. The content of these repos doesn't change over time. When a promotion pipeline is launched, it grabs the latest consistent hash repo and sets it to be tested in the following phases.

  2. Build TripleO images. TripleO is the recommended deployment tool for production usage in RDO and, as such, is tested in RDO CI jobs. Before actually deploying OpenStack using TripleO, the required images are built.

  3. Deploy and test RDO. We run a set of jobs which deploy and test OpenStack using different installers and scenarios to ensure they behave as expected. Currently, the following deployment tools and configurations are tested:
    • TripleO deployments. Using tripleo-quickstart we deploy two different configurations, minimal and minimal_pacemaker, which apply different settings that cover the most common options.
    • OpenStack Puppet scenarios. The puppet-openstack-integration project (a.k.a. p-o-i) maintains a set of puppet manifests to deploy different combinations and configurations of OpenStack services (scenarios) on a single server using the OpenStack puppet modules, and runs tempest smoke tests for the deployed services. The services tested in each scenario can be found in the p-o-i README. Scenarios 1, 2 and 3 are currently tested in RDO CI.
    • Packstack deployments. As part of its upstream testing, Packstack defines three deployment scenarios to verify the correct behavior of the existing options. Note that tempest smoke tests are also executed in these jobs. In RDO CI we leverage those scenarios to test new packages built in the RDO repos.
  4. Repository and images promotion. When all jobs in the previous phase succeed, the tested repository is considered good and is promoted so that users can consume these packages.
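For illustration, a minimal sketch of how such a promoted repository can be consumed on a CentOS 7 node follows; the exact directory layout under trunk.rdoproject.org (release directory and promotion link name) is an assumption and may differ:

$ curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-ocata/current-passed-ci/delorean.repo
$ curl -o /etc/yum.repos.d/delorean-deps.repo \
    https://trunk.rdoproject.org/centos7-ocata/delorean-deps.repo
$ yum install -y openstack-packstack    # any RDO package can now be installed from the promoted repo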

Tools used in RDO CI

  • Job definitions are managed using Jenkins Job Builder (JJB), via the gerrit review workflow in review.rdoproject.org.
  • weirdo is the tool we use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI. It's composed of a set of ansible roles and playbooks that prepare the environment, then deploy and test the installers using the testing scripts provided by the projects (see the sketch after this list).
  • TripleO Quickstart provides a set of scripts, ansible roles and pre-defined configurations to deploy an OpenStack cloud using TripleO in a simple and fully automated way.
  • ARA is used to store and visualize the results of ansible playbook runs, making it easier to analyze and troubleshoot them.
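As a rough sketch of how weirdo is driven (the playbook name, inventory and variables below are illustrative assumptions, not the exact files shipped by the project):

$ git clone https://github.com/rdo-infra/weirdo
$ cd weirdo
$ ansible-playbook -i hosts playbooks/packstack-scenario001.yml \
    -e openstack_release=ocata    # runs the upstream Packstack scenario001 job locally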

Infrastructure

RDO is part of the CentOS Cloud Special Interest Group, so we run promotion pipelines in the CentOS CI infrastructure, where Jenkins is used as the continuous integration server.

Handling issues in RDO CI

An important aspect of running RDO CI is properly managing the errors found in the jobs included in the promotion pipelines. The root cause of these issues sometimes lies in the upstream OpenStack projects:

  • Some problems are not caught in the devstack-based jobs running in the upstream gates.
  • In some cases, new versions of OpenStack services require changes in the deployment tools (puppet modules, TripleO, etc…).

One of the contributions of RDO to upstream projects is to increase their test coverage and help identify problems as soon as possible. When we find them, we report them upstream as Launchpad bugs and propose fixes when possible.

Every time we find an issue, a new card is added to the TripleO and RDO CI Status Trello board where we track the status and activities carried out to get it fixed.

Status of promotion pipelines

If you are interested in the status of the promotion pipelines in RDO CI you can check:

  • The CentOS CI RDO view can be used to see the results and status of the jobs for each OpenStack release.

  • The RDO Dashboard shows the overall status of RDO packaging and CI.

View article »

A tale of Tempest rpm with Installers

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest rpm package with the installers so it can be consumed by the various CIs rather than using raw upstream sources.

And the story begins from here:

In RDO, we deliver Tempest as an rpm that anyone can consume to test their cloud. Until the Newton release, we maintained a fork of Tempest which contained the config_tempest.py script to auto-generate tempest.conf for your cloud and a set of helper scripts to run Tempest tests, along with some backports for each release. From Ocata, we have changed the source of the Tempest rpm from the forked Tempest to upstream Tempest, keeping the old source up to Newton in RDO through rdoinfo. We are using the rdo-patches branch to maintain backported patches starting from the Ocata release.

With this change, we have moved the config_tempest.py script from the forked Tempest repository to a separate project, python-tempestconf, so that it can be used with vanilla Tempest to generate the Tempest configuration automatically.
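As a hedged sketch of the idea (the repository location, script path and option names are assumptions and may have changed between releases), generating a tempest.conf for an existing cloud looks roughly like this:

$ git clone https://github.com/redhat-openstack/python-tempestconf
$ cd python-tempestconf
$ source ~/keystonerc_admin    # admin credentials of the cloud under test
$ python config_tempest/config_tempest.py --create --debug \
    identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD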

What have we done to make a happy integration between Tempest rpm and the installers?

Currently, puppet-openstack-integration, packstack, and tripleo-quickstart heavily use RDO packages, so using the Tempest rpm with these installers is the best match. Before starting the integration, we needed to prepare the ground. Until the Newton release, all these installers used Tempest from source in their respective CIs. We then started the matchmaking of the Tempest rpm with the installers. puppet-openstack-integration and packstack consume puppet modules, so in order to consume the Tempest rpm, we first needed to fix puppet-tempest.

puppet-tempest

It is a puppet module to install and configure Tempest and the Tempest plugins of OpenStack services, based on the services available, from source as well as from packages. We have fixed puppet-tempest to install the Tempest rpm from the package and create a Tempest workspace. In order to use that feature through the puppet-tempest module [https://review.openstack.org/#/c/425085/], you need to add install_from_source => 'false' and tempest_workspace => 'path to tempest workspace' to tempest.pp and it will do the job for you. Now we are using the same feature in puppet-openstack-integration and packstack.
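A minimal sketch of what that looks like in practice; the two parameters come from the text above, while the workspace path and the standalone puppet apply invocation are illustrative:

$ cat > tempest.pp <<'EOF'
class { 'tempest':
  install_from_source => 'false',
  tempest_workspace   => '/var/lib/tempest',
}
EOF
$ sudo puppet apply tempest.pp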

puppet-openstack-integration

It is a collection of scripts and manifests for puppet module testing (which powers the openstack-puppet CI). From the Ocata release, we have added a TEMPEST_FROM_SOURCE flag to the run_tests.sh script. Just change TEMPEST_FROM_SOURCE to false in run_tests.sh, and Tempest will be installed and configured from packages using puppet-tempest.
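A hedged sketch of that workflow; whether the flag can be overridden from the environment as shown here, and how a scenario is selected, are assumptions:

$ git clone https://github.com/openstack/puppet-openstack-integration
$ cd puppet-openstack-integration
$ export SCENARIO=scenario001          # assumed scenario selection mechanism
$ export TEMPEST_FROM_SOURCE=false     # install Tempest from RDO packages via puppet-tempest
$ ./run_tests.sh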

packstack

It is a utility to install OpenStack on CentOS, Red Hat Enterprise Linux or other derivatives in proof of concept (PoC) environments. Until Newton, Tempest was installed and run by packstack from the upstream source, with puppet-tempest doing the job for us behind the scenes. From Ocata, this feature uses the Tempest RDO package instead. You can use it by running the following command:

$ sudo packstack --allinone --config-provision-tempest=y --run-tempest=y

It will perform a packstack all-in-one installation and, after that, install and configure Tempest and run smoke tests on the deployed cloud. We use the same approach in RDO CI.

tripleo-quickstart

It is an ansible-based project for setting up TripleO virtual environments. It uses tripleo-quickstart-extras, where the validate-tempest role lives; this role is used to install, configure and run Tempest on a TripleO deployment after installation. We have improved the validate-tempest role to use the Tempest rpm package for all releases supported by OpenStack upstream, keeping the old workflow, using the Ocata Tempest rpm, using ostestr to run the Tempest tests for all releases, and using python-tempestconf to generate tempest.conf, through this patch.

To see it in action, run the following commands:

$ wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
$ bash quickstart.sh --install-deps
$ bash quickstart.sh -R master --tags all $VIRTHOST

So the integration of the Tempest rpm with the installers is finally done, and it is happily consumed in the different CIs. This will help to test and produce a more robust OpenStack cloud in RDO, as well as catch issues between Tempest and the Tempest plugins early.

Thanks to apevec, jpena, amoralej, Haikel, dmsimard, dmellado, tosky, mkopec, arxcruz, sshnaidm, mwhahaha, EmilienM and many more on the #rdo channel for getting this work done in the last two and a half months. It was a great learning experience.

View article »

Let rdopkg manage your RPM package

rdopkg is an RPM packaging automation tool written to effortlessly keep packages in sync with (fast moving) upstream.

rdopkg is a little opinionated, but when you set up your environment right, most packaging tasks are reduced to a single rdopkg command:

  • Introduce/remove patches: rdopkg patch
  • Rebase patches on a new upstream version: rdopkg new-version

rdopkg builds upon the concept of distgit, which simply refers to maintaining RPM package source files in a git repository. For example, all Fedora and CentOS packages are maintained in distgit.

Using a version control system for packaging is great, so rdopkg extends this by requiring patches to also be maintained using git, as opposed to storing them as simple .patch files in distgit.

For this purpose, rdopkg introduces the concept of a patches branch, which is simply a git branch containing… yeah, patches. Specifically, a patches branch contains the upstream git tree with optional downstream patches on top.

In other words, patches are maintained as git commits, the same way they are managed upstream. To introduce a new patch to a package, just git cherry-pick it onto the patches branch and let rdopkg patch do the rest. Patch files are generated from git and the .spec file is changed automatically.

When a new version is released upstream, rdopkg can rebase the patches branch on the new version and update distgit automatically. Instead of hoping that some .patch files apply to an ever-changing tarball, git can be used to rebase the patches, which brings many advantages such as automatically dropping patches already included in the new release, and more.

Requirements

upstream repo requirements

Your project needs to be maintained in a git repository and use Semantic Versioning tags for its releases, such as 1.2.3 or v1.2.3.

distgit

Fedora packages already live in distgit repos which packagers can get by

fedpkg clone package

If your package doesn't have a distgit yet, simply create a git repository and put all the files from .src.rpm SOURCES in there.
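For example, a minimal sketch of bootstrapping such a distgit from an existing source rpm (package name and paths are illustrative):

mkdir foo-distgit && cd foo-distgit
git init
rpm2cpio ../foo-1.2.3-1.src.rpm | cpio -idmv    # extracts the .spec file, patches and sources
git add .
git commit -m "Initial import of foo 1.2.3"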

The el7 distgit branch is used in the following example.

patches branch

Finally, you need a repository to hold your patches branches. This can be the same repo as distgit or a different one. You can use various processes to manage your patches branches, the simplest one being the packager maintaining them manually as they would with .patch files.

The el7-patches patches branch is used in the following example.

install rdopkg

rdopkg page contains installation instructions. Most likely, this will do:

dnf install rdopkg

Initial setup

Start with cloning distgit:

git clone $DISTGIT
cd $PACKAGE

Add patches remote which contains/is going to contain patches branches (unless it's the same as origin):

git remote add -f patches $PATCHES_BRANCH_GIT

While optional, it's strongly recommended to also add an upstream remote pointing to the project upstream, to allow easy initial patches branch setup, cherry-picking and some extra rdopkg automagic detection:

git remote add -f upstream $UPSTREAM_GIT

Clean .spec

In this example we'll assume we're building a package for the EL 7 distribution and will use the el7 branch for our distgit:

git checkout el7

Clean the .spec file. Replace hardcoded version strings (especially in URL) with macros so that the .spec stays current when Version changes. Check rdopkg pkgenv to see what rdopkg thinks about your package:

editor foo.spec
rdopkg pkgenv
git commit -a
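For example, a hardcoded source URL like

Source0: https://example.com/foo/foo-1.2.3.tar.gz

would typically become

Source0: https://example.com/foo/foo-%{version}.tar.gz

so it stays correct when Version is bumped (the URL is, of course, just an illustration).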

Prepare patches branch

By convention, rdopkg expects $BRANCH distgit branch to have appropriate $BRANCH-patches patches branch.

Thus, for our el7 distgit, we need to create el7-patches branch.

First, see the current Version:

rdopkg pkgenv | grep Version

Assume our package is at Version: 1.2.3.

The upstream remote should contain an associated 1.2.3 version tag, which should correspond to the 1.2.3 release tarball, so let's use that as the base for our new patches branch:

git checkout -b el7-patches 1.2.3

Finally, if you have some .patch files in your el7 distgit branch, you need to apply them on top of el7-patches now.

Some patches might be present in upstream remote (like backports) so you can git cherry-pick them.

Once happy with your patches on top of 1.2.3, push your patches branch into the patches remote:

git push patches el7-patches

Update distgit

With el7-patches patches branch in order, try updating your distgit:

git checkout el7
rdopkg patch

If this fails, you can try the lower-level rdopkg update-patches, which skips certain magic but isn't recommended for normal usage.

Once this succeeds, inspect the newly created commit that updates the .spec file and .patch files from the el7-patches patches branch.

Ready to rdopkg

After this, you should be able to manage your package using rdopkg.

Please note that both rdopkg patch and rdopkg new-version will reset local el7-patches to remote patches/el7-patches unless you supply -l/--local-patches option.

To introduce/remove patches, simply modify remote el7-patches patches branch and let rdopkg patch do the rest:

rdopkg patch

To update your package to a new upstream version, including a patches rebase:

git fetch --all
rdopkg new-version

Finally, if you just want to fix your .spec file without touching patches:

rdopkg fix
# edit .spec
rdopkg -c

More information

List all rdopkg actions with:

rdopkg -h

Most rdopkg actions have some handy options; see them with:

rdopkg $ACTION -h

Read the friendly manual:

man rdopkg

You can also read RDO packaging guide which contains some examples of rdopkg usage in RDO.

Happy packaging!

View article »

Blogs, week of March 6th

There are lots of great blog posts this week from the RDO community.

RDO Ocata Release Behind The Scenes by Haïkel Guémar

I have been involved in 6 GA releases of RDO (From Juno to Ocata), and I wanted to share a glimpse of the preparation work. Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, moved to Software Factory.

Read more at http://tm3.org/ec

Developing Mistral workflows for TripleO by Steve Hardy

During the newton/ocata development cycles, TripleO made changes to the architecture so we make use of Mistral (the OpenStack workflow API project) to drive workflows required to deploy your OpenStack cloud.

Read more at http://tm3.org/ed

Use a CI/CD workflow to manage TripleO life cycle by Nicolas Hicher

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

Read more at http://tm3.org/ee

Red Hat Knows OpenStack by Rich Bowen

Clips of some of my interviews from the OpenStack PTG last week. Many more to come.

Read more at http://tm3.org/ef

OpenStack Pike PTG: TripleO, TripleO UI - Some highlights by jpichon

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.

Read more at http://tm3.org/eg

OpenStack PTG, trip report by rbowen

Last week, I attended the OpenStack PTG (Project Teams Gathering) in Atlanta.

Read more at http://tm3.org/eh

View article »

Use a CI/CD workflow to manage TripleO life cycle

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

The goal is to use Software Factory to submit reviews to create or update a TripleO deployment. The review process ensures peer validation before executing the deployment or update command. The deployment will be done within OpenStack tenants. We will split each role into a different tenant to ensure network isolation between services.

Tools

Software Factory

Software Factory (also called SF) is a collection of services that provides a powerful platform to build software.

The main advantages of using Software Factory to manage the deployment are:

  • Cross-project gating system (through user defined jobs).
  • Code-review system to ensure peer validation before changes are merged.
  • Reproducible test environments with ephemeral slaves.

Python-tripleo-helper

Python-tripleo-helper is a library that provides a complete Python API to drive an OpenStack deployment (TripleO). It allows you to:

  • Deploy OpenStack with TripleO within an OpenStack tenant
  • Deploy a virtual OpenStack using the baremetal workflow with IPMI commands.

TripleO

TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation.


View article »

Blogs, week of Feb 27th

Here's what RDO enthusiasts have been blogging about in the last couple of weeks. I encourage you in particular to read Julie's excellent writeup of the OpenStack Pike PTG last week in Atlanta. And have a look at my video series from the PTG for other engineers' perspectives.

OpenStack Pike PTG: OpenStack Client - Tips and background for interested contributors by jpichon

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sort of issues and cross-projects concerns with fellow OpenStack contributors.

Read more at http://tm3.org/eb

SDN with Red Hat OpenStack Platform: OpenDaylight Integration by Nir Yechiel, Senior Technical Product Manager at Red Hat

OpenDaylight is an open source project under the Linux Foundation with the goal of furthering the adoption and innovation of software-defined networking (SDN) through the creation of a common industry supported platform. Red Hat is a Platinum Founding member of OpenDaylight and part of the community alongside a list of participants that covers the gamut from individual contributors to large network companies, making it a powerful and innovative engine that can cover many use-cases.

Read more at http://tm3.org/e8

Installing TripleO Quickstart by Carlos Camacho

This is a brief recipe about how to manually install TripleO Quickstart in a remote 32GB RAM box and not dying trying it.

Read more at http://tm3.org/ea

RDO Ocata released by jpena

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ocata for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ocata is the 15th release from the OpenStack project, which is the work of more than 2500 contributors from around the world (source).

Read more at http://tm3.org/e9

OpenStack Project Team Gathering, Atlanta, 2017 by Rich Bowen

Over the last several years, OpenStack has conducted OpenStack Summit twice a year. One of these occurs in North America, and the other one alternates between Europe and Asia/Pacific.

Read more at http://tm3.org/e0

Setting up a nested KVM guest for developing & testing PCI device assignment with NUMA by Daniel Berrange

Over the past few years OpenStack Nova project has gained support for managing VM usage of NUMA, huge pages and PCI device assignment. One of the more challenging aspects of this is availability of hardware to develop and test against. In the ideal world it would be possible to emulate everything we need using KVM, enabling developers / test infrastructure to exercise the code without needing access to bare metal hardware supporting these features.

Read more at http://tm3.org/e1

ANNOUNCE: libosinfo 1.0.0 release by Daniel Berrange

NB, this blog post was intended to be published back in November last year, but got forgotten in draft stage. Publishing now in case anyone missed the release…

Read more at http://tm3.org/e2

Containerizing Databases with Kubernetes and Stateful Sets by Andrew Beekhof

The canonical example for Stateful Sets with a replicated application in Kubernetes is a database.

Read more at http://tm3.org/e3

Announcing the ARA 0.11 release by dmsimard

We’re on the road to version 1.0.0 and we’re getting closer: introducing the release of version 0.11!

Read more at http://tm3.org/e4

View article »

RDO Ocata Release Behind The Scenes

I have been involved in 6 GA releases of RDO (From Juno to Ocata), and I wanted to share a glimpse of the preparation work. Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, moved to Software Factory.

Our release process does not start when upstream announces GA or even a milestone; no, it starts at the very beginning of the upstream cycle.

Trunk chasing

We have been using DLRN to track upstream changes and continuously build OpenStack as an RPM distribution. Then our CI, hosted on the CentOS community CI, runs multiple jobs on DLRN snapshots. We use the WeIRDO framework to run the same jobs as the upstream CI on our packages. This allows us to detect integration issues early and get either our packaging or the upstream projects fixed. This also includes installers such as OPM, TripleO or Packstack.
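For a flavour of what this trunk chasing means in practice, a hedged sketch of a single local DLRN build follows (the installation method and options are assumptions about the tool's CLI):

$ pip install dlrn
$ dlrn --config-file projects.ini --package-name openstack-nova --head-only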

We also create Ocata tags in CentOS Community Build System (CBS) in order to build dependencies that are incompatible with currently supported releases.

Branching

We start branching the RDO stable release around milestone 3, and have stable builds bootstrapped. This includes:

  • registering packages in CBS; I scripted this part for Ocata using the rdoinfo database.
  • syncing requirements in packages.
  • branching distgit repositories.
  • building upstream releases in CBS; this part used to be semi-automated using the rdopkg tool, and Alfredo is consolidating it into a cron job creating reviews.
  • tagging builds in -testing repositories; some automation is in preparation.

Trunk chasing continues, but we pay attention to keeping promotions happening frequently to avoid a gap between tested upstream commits and releases.

GA publication

Since OpenStack does releases slightly ahead of time, we have most of the GA releases built in CBS, but some of them come late. We also trim the final GA repositories and use the repoclosure utility to check that there are no missing dependencies. Before mass-tagging builds in -release, we launch the stable promotion CI jobs and, if they're green, we publish them.
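For reference, a hedged sketch of the kind of repoclosure check mentioned above (the repository ids and the path to the candidate GA repository are illustrative):

$ yum install -y yum-utils
$ repoclosure --repofrompath=rdo-ocata-ga,file:///path/to/ocata-ga-repo \
    --repoid=rdo-ocata-ga --repoid=base --repoid=updates --repoid=extras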

At this stage, the CentOS Core team creates the final GA repositories and signs the packages.

For Newton, it took 10 hours between the upstream GA announcement and repository publication, with 4 hours up to stable tagging. As for Ocata, all stable builds and CI jobs were finished within 2 hours.

Fun fact: Alan and I were doing the last bits of the Ocata release in the Atlanta PTG hallway and even got to see Doug Hellmann send the GA announcement live (which started the chronometer for us). So we sprinted to have RDO Ocata GA ready as soon as possible (CI included!). We still have room for improvement, but we were the first binary OpenStack distro available!

Thoughts

As of Ocata, there are still areas of improvement:

  • documenting the release process: many steps are still manual or require specific knowledge. During the Newton/Ocata releases, we enabled Alfredo to do large chunks of the release preparation work. Together with post-mortems, this helped us clarify the process and prepare to allow more people to help in the release process.
  • dependencies CI: dependencies are a critical factor in releasing RDO on time. We need to test dependencies against RDO releases, RDO against CentOS updates, and ensure that nothing is broken. That's one of our goals for Pike.
  • tag management: tags are used in CBS to determine where builds are to be published. Unlike Fedora, CBS has no automated pipeline to manage updates, so we have to tag builds manually. I'm currently working on a gerrit-based process to manage tags. The tricky part is how to avoid inconsistencies in repositories (e.g. avoiding broken dependencies, accidental untagging, etc.).
  • dependencies updates: we want dependencies to remain compatible with Fedora packages, as Fedora is the foundation of the next RHEL/CentOS. Some of them are maintained in Fedora, others in RDO common, some with basic patches to fix EL7 build issues (not acceptable in Fedora), and the rest are forks that we effectively maintain (e.g. MariaDB). As a first step, we want that last set of packages to be maintained in our gerrit instances to allow maintainers to do builds without any releng support.
  • more contributions! Our effort in automating the release pipeline also serves the goal of empowering more contributors to take part in the release work, so if you're interested, just come and tell us. ;-)

I hope this gave you an overview of how RDO is released and what our next steps are for the Pike release.

View article »

RDO Ocata released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ocata for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ocata is the 15th release from the OpenStack project, which is the work of more than 2500 contributors from around the world (source).

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Interesting things in the Ocata release include:

For cloud operators, RDO now provides packages for some new OpenStack Services:

  • Tacker: an ETSI MANO NFV Orchestrator and VNF Manager
  • Congress: an open policy framework for the cloud
  • Vitrage: the OpenStack RCA (Root Cause Analysis) Service
  • Kolla: The Kolla project provides tooling to build production-ready container images for deploying OpenStack clouds

Some other notable additions:

  • novajoin: a dynamic vendordata plugin for the OpenStack nova metadata service to manage automatic host instantiation in an IPA server
  • ironic-ui: a new Horizon plugin to view and manage baremetal servers
  • python-virtualbmc: VirtualBMC is a proxy that translates IPMI commands to libvirt calls. This allows projects such as OpenStack Ironic to test IPMI drivers using VMs.
  • python-muranoclient: a client for the Application Catalog service.
  • python-monascaclient: a client for the Monasca monitoring-as-a-service solution.
  • Shaker: the distributed data-plane testing tool built for OpenStack
  • Multi-architecture support: aarch64 builds are now provided through an experimental repository - enable the RDO 'testing' repositories to get started

From a networking perspective, we have added some new Neutron plugins that can help Cloud users and operators to address new use cases and scenarios:

  • networking-bagpipe: a mechanism driver for Neutron ML2 plugin using BGP E-VPNs/IP VPNs as a backend
  • networking-bgpvpn: an API and framework to interconnect BGP/MPLS VPNs to Openstack Neutron networks
  • networking-fujitsu: FUJITSU ML2 plugins/drivers for OpenStack Neutron
  • networking-l2gw: APIs and implementations to support L2 Gateways in Neutron
  • networking-sfc: APIs and implementations to support Service Function Chaining in Neutron

From the Packstack side, we have several improvements:

  • We have added support to install Panko and Magnum
  • Puppet 4 is now supported, and we have updated our manifests to cover the latest changes in the supported projects

Getting Started

There are three ways to get started with RDO.

  • To spin up a proof of concept cloud, quickly, and on limited hardware, try the All-In-One Quickstart. You can run RDO on a single node to get a feel for how it works (see the sketch after this list).
  • For a production deployment of RDO, use the TripleO Quickstart and you'll be running a production cloud in short order.
  • Finally, if you want to try out OpenStack, but don't have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack. (TryStack is not, at this time, running Ocata, although it is running RDO.)
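As a rough sketch of the all-in-one path on a fresh CentOS 7 node (a minimal outline, not a substitute for the Quickstart documentation):

$ sudo yum install -y centos-release-openstack-ocata
$ sudo yum update -y
$ sudo yum install -y openstack-packstack
$ sudo packstack --allinone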

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. For more developer-oriented content, we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience in the RDO venues.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we're there too, and also Google+.

View article »

OpenStack Project Team Gathering, Atlanta, 2017

Over the last several years, OpenStack has conducted OpenStack Summit twice a year. One of these occurs in North America, and the other one alternates between Europe and Asia/Pacific.

This year, OpenStack Summit in North America is in Boston, and the other one will be in Sydney.

This year, though, the OpenStack Foundation is trying something a little different. Whereas in previous years a portion of OpenStack Summit was the developers summit, where the next version of OpenStack was planned, this year that's been split off into its own separate event called the PTG - the Project Teams Gathering. That's going to be happening next week in Atlanta.

Throughout the week, I'm going to be interviewing engineers who work on OpenStack. Most of these will be people from Red Hat, but I will also be interviewing people from some other organizations, and posting their thoughts about the Ocata release - what they've been working on, and what they'll be working on in the upcoming Pike release, based on their conversations in the coming week at the PTG.

So, follow this channel over the next couple weeks as I start posting those interviews. It's going to take me a while to edit them after next week, of course. But you'll start seeing some of these appear in my YouTube channel over the coming few days.

Thanks, and I look forward to filling you in on what's happening in upstream OpenStack.

View article »