RDO Community News

See also blogs.rdoproject.org

The journey of a new OpenStack service in RDO

When new contributors join RDO, they ask for recommendations about how to add new services and help RDO users adopt them. This post is neither an official policy document nor a detailed description of how to carry out every activity, but it provides some high-level recommendations to newcomers based on what I have learned and observed over the last year working in RDO.

Note that you are not required to follow all these steps, and you may well have your own ideas about them. If you want to discuss them, let us know your thoughts; we are always open to improvements.

1. Adding the package to RDO

The first step is to add the package(s) to the RDO repositories as shown in the RDO documentation. This typically includes the main service package, the client library, and maybe a package with a plugin for Horizon.

In some cases new packages require some general purpose libraries. If they are not in CentOS base channels, RDO imports them from Fedora packages into a dependencies repository. If you need a new dependency which already exists in Fedora, just let us know and we'll import it into the repo. If it doesn't exist, you'll have to add the new package into Fedora following the existing process.
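For instance, before requesting a new dependency it can save time to check whether it is already packaged somewhere. A minimal sketch, assuming a CentOS box with the RDO repos enabled and the Fedora koji client configured (python-foo is just a placeholder name):

# Is it already packaged in Fedora? (query Fedora's Koji build system)
koji search package python-foo

# Is it already available from the enabled repos (CentOS base or RDO deps)?
yum list available python-foo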

2. Create a puppet module

Although there are multiple deployment tools for OpenStack based on several frameworks, puppet is widely used by different tools, or even directly by operators, so we recommend creating a puppet module to deploy your new service, following the Puppet OpenStack Guide. Once the puppet module is ready, remember to follow the RDO new package process to get it packaged in the repos.
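As an illustration, the Puppet OpenStack project provides tooling to bootstrap new modules; a rough sketch, assuming the puppet-openstack-cookiecutter template described in the Puppet OpenStack Guide is the starting point:

pip install cookiecutter
cookiecutter https://git.openstack.org/openstack/puppet-openstack-cookiecutter
# answer the prompts (module name, service name, ...), then push the
# generated puppet-<service> skeleton to Gerrit for review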

3. Make sure the new service is tested in RDO-CI

As explained in a previous post, we run several jobs in RDO CI to validate the content of our repos. Most of the time, the easiest way to get a new service tested is to add it to one of the puppet-openstack-integration scenarios, which is also the recommended way to get the puppet module tested in the upstream gates. An example of how to add a new service to p-o-i is in this review.
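To get a feel for what the gate runs, you can execute a scenario locally. A sketch, assuming the SCENARIO variable selects the manifest as described in the p-o-i README (scenario003 is only an example):

git clone https://git.openstack.org/openstack/puppet-openstack-integration
cd puppet-openstack-integration
SCENARIO=scenario003 ./run_tests.sh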

4. Adding deployment support in Packstack

If you want to make it easier for RDO users to evaluate a new service, adding it to Packstack is a good idea. Packstack is a puppet-based deployment tool used by RDO users to deploy small proof of concept (PoC) environments to evaluate new services or configurations before deploying them in their production clouds. If you are interested, you can take a look at these two reviews, which added support for Panko and Magnum in the Ocata cycle.
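Once support is merged, evaluating the service is typically a one-liner for users. A sketch, assuming the usual per-service toggle naming in Packstack (the Magnum flag shown here is illustrative and may differ):

sudo packstack --allinone --os-magnum-install=y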

5. Add it to TripleO

TripleO is a powerful OpenStack management tool able to provision and manage cloud environments with production-ready features such as high availability, extended security, etc. Adding support for new services in TripleO will help users adopt them in their cloud deployments. The TripleO composable roles tutorial can guide you through how to do it.
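In practice, once the composable service templates exist, operators enable the service at deploy time with an extra environment file. A sketch; the environment file name below is hypothetical, see the tutorial for the real paths:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-new-service.yaml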

6. Build containers for new services

Kolla is the upstream project providing container images and deployment tools to operate OpenStack clouds using container technologies. Kolla supports building images for the CentOS distro using the binary method, which uses packages from RDO. Operators using containers will have an easier time if you add containers for new services.
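For example, building a CentOS binary-based image pulls the service's RDO packages into the container. A sketch, with my-new-service as a placeholder image name:

kolla-build --base centos --type binary my-new-service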

Other recommendations

Follow OpenStack governance policies

RDO methodology and tooling are designed around the upstream OpenStack release model, so following the policies on release management and requirements is a big help in maintaining packages in RDO. It's especially important to create branches and version tags as defined by the releases team.

Making potential users aware of the availability of new services or other improvements is a good practice. RDO provides several ways to do this, such as sending mail to our mailing lists, writing a post on the blog, adding references to our documentation, creating screencast demos, etc. You can also join the RDO weekly meeting to let us know about your work.

Join RDO Test Days

RDO organizes test days at several milestones during each OpenStack release cycle. Although we do continuous integration testing in RDO, it's good to verify that everything can be deployed by following the instructions in the documentation. You can propose new services or configurations for the test matrix and add a link to the documented instructions on how to deploy them.

Upstream documentation

RDO relies on the upstream OpenStack Installation Guide for deployment instructions. Keeping it up to date is recommended.

View article »

Blog posts, week of March 20

Here's what the RDO community has been blogging about in the last week.

Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta by Rich Bowen

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.

Read more at http://rdoproject.org/blog/2017/03/joe-talerico-and-openstack-performance-at-the-openstack-ptg-in-atlanta/

RDO CI promotion pipelines in a nutshell by amoralej

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:

Read more at http://rdoproject.org/blog/2017/03/rdo-ci-in-a-nutshell/

A tale of Tempest rpm with Installers by chandankumar

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with the installers so it can be consumed by the various CIs rather than using raw upstream sources.

Read more at http://rdoproject.org/blog/2017/03/a-tale-of-tempest-rpm-with-installers/

An even better Ansible reporting interface with ARA 0.12 by dmsimard

Not even a month ago, I announced the release of ARA 0.11 with a bunch of new features and improvements.

Read more at https://dmsimard.com/2017/03/12/an-even-better-ansible-reporting-interface-with-ara-0-12/

Let rdopkg manage your RPM package by

rdopkg is an RPM packaging automation tool written to effortlessly keep packages in sync with (fast-moving) upstreams.

Read more at http://rdoproject.org/blog/2017/03/let-rdopkg-manage-your-RPM-package/

Using Software Factory to manage Red Hat OpenStack Platform lifecycle by Maria Bracho, Senior Product Manager OpenStack

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery. Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: using Gerrit for code reviews, Zuul/Nodepool/Jenkins as a CI system, and Storyboard for stories and issue tracking. Also, it ensures a reproducible test environment with ephemeral Jenkins slaves.

Read more at http://redhatstackblog.redhat.com/2017/03/08/using-software-factory-to-manage-red-hat-openstack-platform-lifecycle/

View article »

Joe Talerico and OpenStack Performance at the OpenStack PTG in Atlanta

Last month at the OpenStack PTG in Atlanta, Joe Talerico spoke about his work on OpenStack Performance in the Ocata cycle.

Subscribe to our YouTube channel for more videos like this.

Joe: Hi, I'm Joe Talerico. I work on OpenStack at Red Hat, doing OpenStack performance. In Ocata, we're going to be looking at doing API and dataplane performance and performance CI. In Pike we're looking at doing mix/match workloads of Rally, Shaker, and perfkit benchmarker, and different styles, different workloads running concurrently. That's what we're looking forward to in Pike.

Rich: How long have you been working on this stuff?

Joe: OpenStack performance, probably right around four years now. I started with doing Spec Cloud development, and Spec Cloud development turned into doing performance work at Red Hat for OpenStack … actually, it was Spec Virt, then Spec Cloud, then performance at OpenStack.

Rich: What kind of things were in Ocata that you find interesting?

Joe: In Ocata … for us … well, in Newton, composable roles, but building upon that, in TripleO, being able to do … breaking out the control plane even further, being able to scale out our deployments to much larger clouds. In Ocata, we're looking to work with CNCF, and do a 500 node deployment, and then put OpenShift on top of that, and find some more potential performance issues, or performance gains, going from Newton to Ocata. We've done this previously with Newton, we're going to redo it with Ocata.

View article »

RDO CI promotion pipelines in a nutshell

One of the key goals in RDO is to provide a set of well tested and up-to-date repositories that can be smoothly used by our users:

  • Operators deploying OpenStack with any of the available tools.
  • Upstream projects using RDO repos to develop and test their patches, such as the OpenStack puppet modules, TripleO or Kolla.

To include new patches in RDO packages as soon as possible, in the RDO Trunk repos we build and publish new packages whenever commits are merged in the upstream repositories. To ensure the content of these packages is trustworthy, we run different tests which help us identify any problems introduced by the merged changes.

This post provides an overview of how we test RDO repositories. If you are interested in collaborating with us on running and improving it, feel free to let us know in the #rdo channel on freenode or on the rdo-list mailing list.

Promotion Pipelines

Promotion pipelines are composed of a set of related CI jobs that are executed for each supported OpenStack release to test the content of a specific RDO repository. Currently, promotion pipelines are executed in different phases:

  1. Define the repository to be tested. RDO Trunk repositories are identified by a hash based on the upstream commit of the last built package. The content of these repos doesn't change over time. When a promotion pipeline is launched, it grabs the latest consistent hash repo and sets it to be tested in the following phases.

  2. Build TripleO images. TripleO is the recommended deployment tool for production usage in RDO and, as such, is tested in RDO CI jobs. Before actually deploying OpenStack using TripleO, the required images are built.

  3. Deploy and test RDO. We run a set of jobs which deploy and test OpenStack using different installers and scenarios to ensure they behave as expected. Currently, the following deployment tools and configurations are tested:
    • TripleO deployments. Using tripleo-quickstart we deploy two different configurations, minimal and minimal_pacemaker, which apply different settings that cover the most common options.
    • OpenStack Puppet scenarios. The puppet-openstack-integration project (a.k.a. p-o-i) maintains a set of puppet manifests to deploy different combinations and configurations of OpenStack services (scenarios) on a single server using the OpenStack puppet modules, and to run tempest smoke tests for the deployed services. The services tested in each scenario can be found in the p-o-i README. Scenarios 1, 2 and 3 are currently tested in RDO CI.
    • Packstack deployments. As part of its upstream testing, packstack defines three deployment scenarios to verify the correct behavior of the existing options. Note that tempest smoke tests are also executed in these jobs. In RDO CI we leverage those scenarios to test new packages built in the RDO repos.
  4. Repository and image promotion. When all jobs in the previous phase succeed, the tested repository is considered good and is promoted so that users can consume these packages, as sketched below.
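For example, on CentOS 7 the latest promoted master repository can be enabled like this (a sketch; current-passed-ci is the promotion link name used at the time of writing, and openstack-packstack is just an example package):

curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-master/current-passed-ci/delorean.repo
yum install openstack-packstack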

Tools used in RDO CI

  • Job definitions are managed using Jenkins Job Builder (JJB) via a Gerrit review workflow in review.rdoproject.org.
  • weirdo is the tool we use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI. It's composed of a set of ansible roles and playbooks that prepare the environment, then deploy and test with the installers using the testing scripts provided by those projects (see the sketch after this list).
  • TripleO Quickstart provides a set of scripts, ansible roles and pre-defined configurations to deploy an OpenStack cloud using TripleO in a simple and fully automated way.
  • ARA is used to store and visualize the results of ansible playbook runs, making it easier to analyze and troubleshoot them.
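As referenced in the weirdo item above, a job can be reproduced outside Jenkins with plain ansible. A rough sketch; the repository location, inventory and playbook name are illustrative, so check the weirdo README for the actual entry points:

git clone https://github.com/rdo-infra/weirdo
cd weirdo
ansible-playbook -i hosts playbooks/packstack-scenario001.yml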

Infrastructure

RDO is part of the CentOS Cloud Special Interest Group, so we run the promotion pipelines in the CentOS CI infrastructure, where Jenkins is used as the continuous integration server.

Handling issues in RDO CI

An important aspect of running RDO CI is properly managing the errors found in the jobs included in the promotion pipelines. The root cause of these issues sometimes lies in the upstream OpenStack projects:

  • Some problems are not caught in the devstack-based jobs running in the upstream gates.
  • In some cases, new versions of OpenStack services require changes in the deployment tools (puppet modules, TripleO, etc…).

One of the contributions of RDO to the upstream projects is to increase their test coverage and help identify problems as soon as possible. When we find issues, we report them upstream as Launchpad bugs and propose fixes when possible.

Every time we find an issue, a new card is added to the TripleO and RDO CI Status Trello board where we track the status and activities carried out to get it fixed.

Status of promotion pipelines

If you are interested in the status of the promotion pipelines in RDO CI you can check:

  • CentOS CI RDO view can be used to see the result and status of the jobs for each OpenStack release.

  • RDO Dashboard shows the overall status of RDO packaging and CI.

More info

View article »

A tale of Tempest rpm with Installers

Tempest is a set of integration tests to run against an OpenStack cloud. Delivering a robust and working OpenStack cloud is always challenging. To make sure what we deliver in RDO is rock-solid, we use Tempest to perform a set of API and scenario tests against a running cloud using different installers like puppet-openstack-integration, packstack, and tripleo-quickstart. This is the story of how we integrated the RDO Tempest RPM package with the installers so it can be consumed by the various CIs rather than using raw upstream sources.

And the story begins here:

In RDO, we deliver Tempest as an RPM that anyone can consume to test their cloud. Until the Newton release, we maintained a fork of Tempest which contained the config_tempest.py script to auto-generate tempest.conf for your cloud, a set of helper scripts to run Tempest tests, and some backports for each release. Starting from Ocata, we have changed the source of the Tempest RPM from the forked Tempest to upstream Tempest, keeping the old source up to Newton in RDO through rdoinfo. From the Ocata release on, we use the rdo-patches branch to maintain backported patches.

With this change, we have moved the config_tempest.py script from the forked Tempest repository to a separate project, python-tempestconf, so that it can be used with vanilla Tempest to generate the Tempest configuration automatically.

What have we done to make a happy integration between Tempest rpm and the installers?

Currently, puppet-openstack-integration, packstack, and tripleo-quickstart make heavy use of RDO packages, so using the Tempest RPM with these installers is the best match. Before starting the integration, we needed to prepare the ground: until the Newton release, all of these installers used Tempest from source in their respective CIs. We then started the matchmaking of the Tempest RPM with the installers. puppet-openstack-integration and packstack consume puppet modules, so in order to consume the Tempest RPM, we first needed to fix puppet-tempest.

puppet-tempest

It is a puppet module to install and configure Tempest, and the Tempest plugins of OpenStack services, from source as well as from packages. We fixed puppet-tempest to install the Tempest RPM from the package and create a Tempest workspace. In order to use that feature through the puppet-tempest module [https://review.openstack.org/#/c/425085/], you need to add install_from_source => 'false' and tempest_workspace => 'path to tempest workspace' to tempest.pp, and it will do the job for you. We now use the same feature in puppet-openstack-integration and packstack.

puppet-openstack-integration

It is a collection of scripts and manifests for puppet module testing (which powers the Puppet OpenStack CI). From the Ocata release, we have added a TEMPEST_FROM_SOURCE flag to the run_tests.sh script. Just change TEMPEST_FROM_SOURCE to false in run_tests.sh, and Tempest is then installed and configured from packages using puppet-tempest.
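A minimal sketch of switching a local p-o-i run to the packaged Tempest, assuming the flag default is spelled TEMPEST_FROM_SOURCE=true in your checkout:

$ git clone https://git.openstack.org/openstack/puppet-openstack-integration
$ cd puppet-openstack-integration
$ sed -i 's/TEMPEST_FROM_SOURCE=true/TEMPEST_FROM_SOURCE=false/' run_tests.sh
$ ./run_tests.sh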

packstack

It is a utility to install OpenStack on CentOS, Red Hat Enterprise Linux or other derivatives in proof-of-concept (PoC) environments. Until Newton, Tempest was installed and run by packstack from the upstream source, with puppet-tempest doing the job behind the scenes. From Ocata, we have replaced this with the Tempest RDO package. You can use this feature by running the following command:

$ sudo packstack --allinone --config-provision-tempest=y --run-tempest=y

It will perform a packstack all-in-one installation and, after that, install and configure Tempest and run smoke tests against the deployed cloud. We use the same approach in RDO CI.

tripleo-quickstart

It is an ansible-based project for setting up TripleO virtual environments. It uses tripleo-quickstart-extras, which hosts the validate-tempest role used to install, configure and run Tempest on a TripleO deployment after installation. Through this patch, we have improved the validate-tempest role to use the Tempest RPM package for all releases supported upstream, keeping the old workflow while using the Ocata Tempest RPM, using ostestr to run the Tempest tests for all releases, and using python-tempestconf to generate tempest.conf.

To see it in action, run the following commands:

$ wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
$ bash quickstart.sh --install-deps
$ bash quickstart.sh -R master --tags all $VIRTHOST

So the integration of the Tempest RPM with the installers is finally done, and it is happily consumed in the different CIs. This will help us test and produce a more robust OpenStack cloud in RDO, as well as catch issues between Tempest and the Tempest plugins early.

Thanks to apevec, jpena, amoralej, Haikel, dmsimard, dmellado, tosky, mkopec, arxcruz, sshnaidm, mwhahaha, EmilienM and many more on the #rdo channel for getting this work done in the last two and a half months. It was a great learning experience.

View article »

Let rdopkg manage your RPM package

rdopkg is an RPM packaging automation tool written to effortlessly keep packages in sync with (fast-moving) upstreams.

rdopkg is a little opinionated, but when you set up your environment right, most packaging tasks are reduced to a single rdopkg command:

  • Introduce/remove patches: rdopkg patch
  • Rebase patches on a new upstream version: rdopkg new-version

rdopkg builds upon the concept of distgit, which simply refers to maintaining RPM package source files in a git repository. For example, all Fedora and CentOS packages are maintained in distgit.

Using a version control system for packaging is great, so rdopkg extends this by requiring patches to also be maintained in git, as opposed to storing them as plain .patch files in distgit.

For this purpose, rdopkg introduces the concept of a patches branch, which is simply a git branch containing… yeah, patches. Specifically, a patches branch contains the upstream git tree with optional downstream patches on top.

In other words, patches are maintained as git commits, the same way they are managed upstream. To introduce a new patch to a package, just git cherry-pick it onto the patches branch and let rdopkg patch do the rest. Patch files are generated from git, and the .spec file is updated automatically.

When a new version is released upstream, rdopkg can rebase the patches branch on the new version and update distgit automatically. Instead of hoping that some .patch files still apply to an ever-changing tarball, git is used to rebase the patches, which brings many advantages such as automatically dropping patches already included in the new release, and more.

Requirements

upstream repo requirements

Your project needs to be maintained in a git repository and use Semantic Versioning tags for its releases, such as 1.2.3 or v1.2.3.

distgit

Fedora packages already live in distgit repos, which packagers can get with:

fedpkg clone package

If your package doesn't have a distgit yet, simply create a git repository and put all the files from .src.rpm SOURCES in there.

The el7 distgit branch is used in the following example.

patches branch

Finally, you need a repository to hold your patches branches. This can be the same repo as the distgit or a different one. You can use various processes to manage your patches branches, the simplest being the packager maintaining them manually, as they would with .patch files.

The el7-patches patches branch is used in the following example.

install rdopkg

The rdopkg page contains installation instructions. Most likely, this will do:

dnf copr enable jruzicka/rdopkg
dnf install rdopkg

Initial setup

Start with cloning distgit:

git clone $DISTGIT
cd $PACKAGE

Add a patches remote which contains, or is going to contain, the patches branches (unless it's the same as origin):

git remote add -f patches $PATCHES_BRANCH_GIT

While optional, it's strongly recommended to also add an upstream remote pointing to the project upstream, to allow easy initial patches branch setup, cherry-picking, and some extra rdopkg automagic detection:

git remote add -f upstream $UPSTREAM_GIT

Clean .spec

In this example we'll assume we're building a package for the EL7 distribution and will use the el7 branch for our distgit:

git checkout el7

Clean the .spec file. Replace hardcoded version strings (especially in the URL) with macros so that the .spec stays current when Version changes. Check rdopkg pkgenv to see what rdopkg thinks about your package:

editor foo.spec
rdopkg pkgenv
git commit -a

Prepare patches branch

By convention, rdopkg expects the $BRANCH distgit branch to have a corresponding $BRANCH-patches patches branch.

Thus, for our el7 distgit, we need to create an el7-patches branch.

First, check the current Version:

rdopkg pkgenv | grep Version

Assume our package is at Version: 1.2.3.

The upstream remote should contain the associated 1.2.3 version tag, which should correspond to the 1.2.3 release tarball, so let's use that as a base for our new patches branch:

git checkout -b el7-patches 1.2.3

Finally, if you have some .patch files in your el7 distgit branch, you need to apply them on top of el7-patches now.

Some patches might already be present in the upstream remote (like backports), so you can git cherry-pick them.

Once happy with your patches on top of 1.2.3, push your patches branch into the patches remote:

git push patches el7-patches

Update distgit

With the el7-patches patches branch in order, try updating your distgit:

git checkout el7
rdopkg patch

If this fails, you can try the lower-level rdopkg update-patches, which skips certain magic but isn't recommended for normal usage.

Once this succeeds, inspect the newly created commit that updated the .spec file and the .patch files from the el7-patches patches branch.

Ready to rdopkg

After this, you should be able to manage your package using rdopkg.

Please note that both rdopkg patch and rdopkg new-version will reset the local el7-patches branch to the remote patches/el7-patches unless you supply the -l/--local-patches option.

To introduce or remove patches, simply modify the remote el7-patches patches branch and let rdopkg patch do the rest:

rdopkg patch
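For example, backporting a single upstream fix typically looks like this (the commit hash is a placeholder):

git fetch --all
git checkout el7-patches
git cherry-pick <upstream-commit-sha>
git push patches el7-patches
git checkout el7
rdopkg patch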

To update your package to a new upstream version, including the patches rebase:

git fetch --all
rdopkg new-version

Finally, if you just want to fix your .spec file without touching patches:

rdopkg fix
# edit .spec
rdopkg -c

More information

List all rdopkg actions with:

rdopkg -h

Most rdopkg actions have some handy options; see them with:

rdopkg $ACTION -h

Read the friendly manual:

man rdopkg

You can also read the RDO packaging guide, which contains some examples of rdopkg usage in RDO.

Happy packaging!

View article »

Blogs, week of March 6th

There's lots of great blog posts this week from the RDO community.

RDO Ocata Release Behind The Scenes by Haïkel Guémar

I have been involved in 6 GA releases of RDO (From Juno to Ocata), and I wanted to share a glimpse of the preparation work. Since Juno, our process has tremendously evolved: we refocused RDO on EL7, joined the CentOS Cloud SIG, moved to Software Factory.

Read more at http://tm3.org/ec

Developing Mistral workflows for TripleO by Steve Hardy

During the newton/ocata development cycles, TripleO made changes to the architecture so we make use of Mistral (the OpenStack workflow API project) to drive workflows required to deploy your OpenStack cloud.

Read more at http://tm3.org/ed

Use a CI/CD workflow to manage TripleO life cycle by Nicolas Hicher

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

Read more at http://tm3.org/ee

Red Hat Knows OpenStack by Rich Bowen

Clips of some of my interviews from the OpenStack PTG last week. Many more to come.

Read more at http://tm3.org/ef

OpenStack Pike PTG: TripleO, TripleO UI - Some highlights by jpichon

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.

Read more at http://tm3.org/eg

OpenStack PTG, trip report by rbowen

last week, I attended the OpenStack PTG (Project Teams Gathering) in Atlanta.

Read more at http://tm3.org/eh

View article »

Use a CI/CD workflow to manage TripleO life cycle

In this post, I will present how to use a CI/CD workflow to manage TripleO deployment life cycle within an OpenStack tenant.

The goal is to use Software-Factory to submit reviews to create or update a TripleO deployment. The review process ensures peer validation before executing the deployment or update command. The deployment will be done within OpenStack tenants. We will split each role into a different tenant to ensure network isolation between services.

Tools

Software Factory

Software Factory (also called SF) is a collection of services that provides a powerful platform to build software.

The main advantages of using Software Factory to manage the deployment are:

  • Cross-project gating system (through user defined jobs).
  • Code-review system to ensure peer validation before changes are merged.
  • Reproducible test environment with ephemeral slaves.

Python-tripleo-helper

Python-tripleo-helper is a library that provides a complete Python API to drive an OpenStack deployment (TripleO). It allows you to:

  • Deploy OpenStack with TripleO within an OpenStack tenant.
  • Deploy a virtual OpenStack using the baremetal workflow with IPMI commands.

TripleO

TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundation.

full article

View article »

Blogs, week of Feb 27th

Here's what RDO enthusiasts have been blogging about in the last couple of weeks. I encourage you to particularly read Julie's excellent writeup of the OpenStack Pike PTG last week in Atlanta. And have a look at my video series from the PTG for other engineers' perspectives.

OpenStack Pike PTG: OpenStack Client - Tips and background for interested contributors by jpichon

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sort of issues and cross-projects concerns with fellow OpenStack contributors.

Read more at http://tm3.org/eb

SDN with Red Hat OpenStack Platform: OpenDaylight Integration by Nir Yechiel, Senior Technical Product Manager at Red Hat

OpenDaylight is an open source project under the Linux Foundation with the goal of furthering the adoption and innovation of software-defined networking (SDN) through the creation of a common industry supported platform. Red Hat is a Platinum Founding member of OpenDaylight and part of the community alongside a list of participants that covers the gamut from individual contributors to large network companies, making it a powerful and innovative engine that can cover many use-cases.

Read more at http://tm3.org/e8

Installing TripleO Quickstart by Carlos Camacho

This is a brief recipe about how to manually install TripleO Quickstart in a remote 32GB RAM box and not dying trying it.

Read more at http://tm3.org/ea

RDO Ocata released by jpena

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ocata for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ocata is the 15th release from the OpenStack project, which is the work of more than 2500 contributors from around the world (source).

Read more at http://tm3.org/e9

OpenStack Project Team Gathering, Atlanta, 2017 by Rich Bowen

Over the last several years, OpenStack has conducted OpenStack Summit twice a year. One of these occurs in North America, and the other one alternates between Europe and Asia/Pacific.

Read more at http://tm3.org/e0

Setting up a nested KVM guest for developing & testing PCI device assignment with NUMA by Daniel Berrange

Over the past few years OpenStack Nova project has gained support for managing VM usage of NUMA, huge pages and PCI device assignment. One of the more challenging aspects of this is availability of hardware to develop and test against. In the ideal world it would be possible to emulate everything we need using KVM, enabling developers / test infrastructure to exercise the code without needing access to bare metal hardware supporting these features.

Read more at http://tm3.org/e1

ANNOUNCE: libosinfo 1.0.0 release by Daniel Berrange

NB, this blog post was intended to be published back in November last year, but got forgotten in draft stage. Publishing now in case anyone missed the release…

Read more at http://tm3.org/e2

Containerizing Databases with Kubernetes and Stateful Sets by Andrew Beekhof

The canonical example for Stateful Sets with a replicated application in Kubernetes is a database.

Read more at http://tm3.org/e3

Announcing the ARA 0.11 release by dmsimard

We’re on the road to version 1.0.0 and we’re getting closer: introducing the release of version 0.11!

Read more at http://tm3.org/e4

View article »