RDO Community News

See also blogs.rdoproject.org

fedora-review tool for reviewing RDO packages

This tool makes reviewing RPM packages for Fedora easier by automating most of the process. Through a bash API, the checks can be extended in, and for, any programming language.

We can also use it to review RDO packages on CentOS 7 and Fedora 24.

Install fedora-review and DLRN

[1.] Install fedora-review and Mock

On CentOS 7

Enable the EPEL repositories:

$ sudo yum -y install epel-release

Install the fedora-review el7 build from Fedora Koji, and install mock:

$ sudo yum -y install https://kojipkgs.fedoraproject.org//packages/fedora-review/0.5.3/2.el7/noarch/fedora-review-0.5.3-2.el7.noarch.rpm
$ sudo yum -y install mock

On Fedora 24

$ sudo dnf -y install fedora-review mock

[2.] Add the user you intend to run as to the mock group:

$ sudo usermod -a -G mock $USER
$ newgrp mock
$ newgrp $USER
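
To double-check that your current shell actually picked up the new group membership (optional sanity check):

$ id -nG | grep -w mock    # should print a line containing "mock"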

[3.] Install DLRN:

On CentOS 7

$ sudo yum -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

On Fedora 24

$ sudo dnf -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

The steps below work on both distros.

$ virtualenv rdo
$ source rdo/bin/activate
$ git clone https://github.com/openstack-packages/DLRN.git
$ cd DLRN
$ pip install -r requirements.txt
$ python setup.py develop
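
As a quick, optional sanity check, verify that the dlrn command is now available inside the virtualenv:

$ dlrn --help | head -n 5    # should print the usage summary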

[4.] Generate dlrn.cfg (RDO trunk mock config)

$ dlrn --config-file projects.ini --package-name python-keystoneclient
$ ls <path to cloned DLRN repo>/data/dlrn.cfg

[5.] Add dlrn.cfg to the mock configuration.

Mock configs live in the /etc/mock directory:

$ sudo cp <path to cloned DLRN repo>/data/dlrn.cfg /etc/mock
$ ls /etc/mock/dlrn.cfg
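
Optionally, you can verify that mock accepts the new config by initializing a chroot with it (this can take a while, since mock downloads the chroot packages):

$ mock -r dlrn --init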

Everything is now set, and we are ready to work on RDO package reviews using fedora-review.

Run the fedora-review tool

$ fedora-review -b <RH bug number for RDO Package Review> -m <mock config to use>

Let's review 'python-osc-lib' using dlrn.cfg.

$ fedora-review -b 1346412 -m dlrn
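
fedora-review drops the generated review template and the results of its automated checks into a working directory named after the bug and package. The exact layout can vary between versions, but something along these lines should let you read the report:

$ less 1346412-python-osc-lib/review.txt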

Happy Reviewing!

View article »

Recent RDO blogs, August 1, 2016

Just a few blog posts from the RDO community this week:

ControllerExtraConfig and Tripleo Quickstart by Adam Young

Once I have the undercloud deployed, I want to be able to quickly deploy and redeploy overclouds. However, my last attempt to effect change on the overcloud did not modify the Keystone config file the way I intended. Once again, Steve Hardy helped me to understand what I was doing wrong.

… read more at http://tm3.org/85

OPENSTACK 6TH BIRTHDAY, LEXINGTON, KY by Rich Bowen

Yesterday I spent the day at the University of Kentucky at the OpenStack 6th Birthday Meetup. The day was arranged by Cody Bumgardner and Kathryn Wong from the UK College of Engineering.

… read more at http://tm3.org/86

TripleO deep dive session #4 (Puppet modules) by Carlos Camacho

This is the fourth video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/87

View article »

Recent RDO blogs, July 25, 2016

Here's what RDO enthusiasts have been writing about over the past week:

TripleO deep dive session #3 (Overcloud deployment debugging) by Carlos Camacho

This is the third video from a series of “Deep Dive” sessions related to TripleO deployments.

… read (and watch) more at http://tm3.org/81

How connection tracking in Open vSwitch helps OpenStack performance by Jiri Benc

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.

… read more at http://tm3.org/82

Introduction to Red Hat OpenStack Platform Director by Marcos Garcia

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

… read more at http://tm3.org/83

Cinder Active-Active HA – Newton mid-cycle by Gorka Eguileor

Last week the OpenStack Cinder mid-cycle sprint took place in Fort Collins, and on the first day we discussed the Active-Active HA effort that’s been going on for a while now and the plans for the future. This is a summary of that session.

… read more at http://tm3.org/84

View article »

Recent RDO blogs, July 19, 2016

Here's what RDO enthusiasts have been blogging about in the last week.

OpenStack 2016.1-1 release by Haïkel Guémar

The RDO Community is pleased to announce a new release of openstack-utils.

… read more at http://tm3.org/7x

Improving RDO packaging testing coverage by David Simard

DLRN builds packages and generates repositories in which these packages will be hosted. It is the tool that is developed and used by the RDO community to provide the repositories on trunk.rdoproject.org. It continuously builds packages for every commit for projects packaged in RDO.

… read more at http://tm3.org/7y

TripleO deep dive session #2 (TripleO Heat Templates) by Carlos Camacho

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/7z

How to build new OpenStack packages by Chandan Kumar

Building new OpenStack packages for RDO is always tough. Let's use DLRN to make our life simpler.

… read more at http://tm3.org/7-

OpenStack Swift mid-cycle hackathon summary by cschwede

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

… read more at http://tm3.org/80

View article »

OpenStack Swift mid-cycle hackathon summary

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

There are always way more topics to discuss than there is time, so we collected topics first and everyone voted afterwards. We came up with the following major discussions, which are currently the most interesting within our community:

  • Hummingbird replication
  • Crypto - what's next
  • Partition power increase
  • High-latency media
  • Container sharding
  • Golang - how to get it accepted in master
  • Policy migration

There were a lot more topics, and I'd like to highlight a few of them.

H9D aka Hummingbird / Golang

This was a big topic - as expected. Rackspace has already shown that H9D significantly improves the performance of the object servers and of replication compared to the current Python implementation. There were also some investigations into whether the speed could be improved using PyPy and other optimizations; however, the major problem is that Python blocks processes on file I/O, no matter whether it is async I/O or not. Sam wrote a very nice summary about this earlier on [1].

NTT also benchmarked H9D and showed some impressive numbers as well. In short, throughput increased 5-10x depending on parameters like object size. It seems disks are no longer the bottleneck - now the proxy CPU is. That said, inode cache memory seems to be even more important, because with H9D one can issue many more disk requests.

Of course there were also discussions about another proposal to accept Golang within OpenStack, and those discussions will continue [2]. My personal view is that the H9D implementation has some major advantages and that hopefully (a refactored subset of) it will be accepted for merging to master.

Crypto retro & what's next

Swift 2.9.0 was released this past week and includes the merged crypto branch [3]. Kudos to everyone involved, especially Janie and Alistair! This middleware makes it possible for operators to fully encrypt object data on disk.
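
For the curious, enabling the at-rest encryption support boils down to adding the keymaster and encryption filters to the proxy pipeline in proxy-server.conf. A minimal sketch (the secret is only a placeholder; see the Swift documentation for the authoritative settings):

[pipeline:main]
pipeline = catch_errors ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = <base64-encoded 32-byte secret, placeholder>

[filter:encryption]
use = egg:swift#encryption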

We did a retro on the work done so far; it was the third time that we used a feature branch and a final soft-freeze to land a major change within Swift. There are pros and cons to this, but overall it worked pretty well again. It also made sense that reviewers stepped in late in the process, because this brought fresh perspectives to the whole work. Soft freezes also push more reviewers to contribute and finally get the change merged.

Swiftstack benchmarked the crypto branch; as expected the throughput decreases somewhat with crypto enabled (especially with small objects), while proxy CPU usage increases. There were some discussions about improving the performance, and it seems the impact from checksumming is significant here.

The next steps for improving the crypto middleware are to work on external key master implementations (for example using Barbican) as well as key rotation.

Partition power increase

Finally, there is now a patch ready for review that will allow an operator to increase the partition power without downtime for end users [4].

I gave an overview of the implementation and also showed a demo of how this works. Based on discussions during the week I spotted some minor edge cases that have since been fixed, and I hope to get this merged before Barcelona. We also dreamed a little about a future Swift with automatic partition power increase, where an operator would need to think about this much less than today.

Various middlewares

There are some proposed middlewares that are important to their authors, and we discussed quite a few of them. This includes:

  • High-latency media (aka archiving)
  • symlinks
  • notifications
  • versioning

The idea behind supporting high-latency media is to use cold storage (like tape, or other public cloud object storage with a possible multi-hour latency) for less frequently accessed data, and especially to offer a low-cost long-term archival solution based on Swift [5]. This is somewhat challenging for the upstream community, because most contributors don't have access to large enterprise tape libraries for testing. In the end this middleware needs to be supported by the community, and a stand-alone repository outside of Swift itself might therefore make the most sense (similar to the swift3 middleware [6]).

A new proposal to implement true history-based versioning was put forward earlier on, and some open questions were talked through. This should hopefully land soon, adding an improved approach to versioning compared with today's stack-based versioning [7].

Sending out notifications based on writes to Swift has been discussed before, and thankfully Zaqar now supports temporary signed URLs, solving some of the issues we faced earlier on. I'll update my patch shortly [8]. There is also the option of using oslo.messaging. All in all, the idea is to use a best-effort approach - it's simply not possible to guarantee a notification has been delivered successfully without blocking requests.

Container sharding

As of today it's a good idea to avoid billions of objects in a single container in Swift, because writes to that container can then get slow. Matt started working on container sharding some time ago [9] and has iterated once again because he ran into new problems with the previous ideas. My impression is that the new approach is getting much closer to something that will eventually be merged, thanks to Matt's persistence on this topic.

Summary

A lot more (smaller) topics were discussed, but this should give you an overview of the current work going on in the Swift community and the interesting new features that we'll hopefully see soon in Swift itself. Thanks to everyone who contributed and participated, and special thanks to Richard for organizing the hackathon - it was a great week and I'm looking forward to the next months!

View article »

How to build new OpenStack packages

Building new OpenStack packages for RDO is always tough. Let's use DLRN to make our life simpler.

DLRN is the RDO continuous delivery platform: it pulls upstream git repositories, rebuilds them as RPMs using template spec files, and ships them in repositories consumable by CI (e.g. the upstream Puppet/TripleO/Packstack CIs).

We can use DLRN to build a new RDO Python package before sending it for package review.

Install DLRN

[1.] Install the required dependencies for DLRN on a Fedora/CentOS system:

$ sudo yum install git createrepo python-virtualenv mock gcc \
              redhat-rpm-config rpmdevtools libffi-devel \
              openssl-devel

[2.] Create a virtualenv and activate it

$ virtualenv dlrn-venv
$ source dlrn-venv/bin/activate

[3.] Clone the DLRN git repository from GitHub

$ git clone https://github.com/openstack-packages/DLRN.git

[4.] Install the required python dependencies for DLRN

$ cd DLRN
$ pip install -r requirements.txt

[5.] Install DLRN

$ python setup.py develop

Now your system is ready to use DLRN.

Add the user you intend to run as to the mock group:

$ sudo usermod -a -G mock $USER
$ newgrp mock
$ newgrp $USER

Let us package the "congress" OpenStack project for RDO

[1.] Create a project directory "congress-distgit" and initialize it with git init

$ mkdir congress-distgit
$ cd congress-distgit
$ git init

[2.] Create a branch "rpm-master"

$ git checkout -b rpm-master

[3.] Create the openstack-congress.spec file using the RDO spec template and commit it to the rpm-master branch.
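
If you don't have a spec file yet, one possible starting point is to generate a skeleton with rpmdev-newspec (part of the rpmdevtools package installed earlier) and then adapt it to the RDO OpenStack spec template conventions:

$ rpmdev-newspec openstack-congress    # writes a bare openstack-congress.spec skeleton in the current directory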

$ git add openstack-congress.spec
$ git commit -m "<your commit message>"

Add a package entry in rdoinfo

[1.] Copy the rdoinfo directory somewhere local and make your changes there.

$ rdopkg info && cp -r ~/.rdopkg/rdoinfo $SOMEWHERE_LOCAL
$ cd $SOMEWHERE_LOCAL/rdoinfo

[2.] Edit the rdo.yml file and add a package entry at the end

$ vim rdo.yml
- project: congress # project name
  name: openstack-congress # RDO package name
  upstream: git://github.com/openstack/%(project)s # Congress project source code git repository
  master-distgit: <path to project spec file git repo>.git # path to congress-distgit git directory
  maintainers:
  - < maintainer email > # your email address

[3.] Save rdo.yml and run

$ ./verify.py

This will check rdo.yml sanity.

Run DLRN to build the openstack-congress package

[1.] Go to the DLRN project directory.

[2.] Run the following command to build the package

$ # --info-repo:    point to the local rdoinfo copy edited above
$ # --package-name: the package to build
$ # --head-only:    build the package from the latest commit only
$ dlrn --config-file projects.ini \
        --info-repo $SOMEWHERE_LOCAL/rdoinfo \
        --package-name openstack-congress \
        --head-only

This will clone the project source code and its spec file (the latter under the "openstack-congress_distro" folder) inside DLRN's data directory.

[3.] Once done, you can rebuild the package by passing the --dev flag.

$ # --dev: build the package locally in development mode
$ dlrn --config-file projects.ini \
        --info-repo $SOMEWHERE_LOCAL/rdoinfo \
        --package-name openstack-congress \
        --head-only \
        --dev

[4.] Once the build is completed, you can find the RPMs and SRPMs in this folder

$ # packaged RPMs and SRPMs end up here
$ ls <path to DLRN>/data/repos/current/

Now grab the RPMs and feel free to test them.
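
For example, a couple of quick, non-destructive checks on the freshly built packages (the file names are illustrative; adjust the path to your DLRN checkout):

$ rpm -qpl <path to DLRN>/data/repos/current/openstack-congress-*.rpm            # list the packaged files
$ rpm -qp --requires <path to DLRN>/data/repos/current/openstack-congress-*.rpm  # show the runtime dependencies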

View article »

OpenStack 2016.1-1 release

The RDO Community is pleased to announce a new release of openstack-utils.

Changes:

  • Drop now out-of-date openstack-db utility
  • openstack-status: autodetect MySQL variant
  • openstack-status: check LBaaSv2 agent status

Packages are already in RDO repositories.

Download sources on GitHub

Please report issues/RFEs on the openstack-utils issue tracker

openstack-utils is a collection of utilities for OpenStack services on RDO. It includes the following (see the examples below):

  • openstack-config: safely manipulate OpenStack configuration files.
  • openstack-status: check your OpenStack services status.
  • openstack-service: control OpenStack services.
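
For illustration, typical invocations look something like this (the file, section and option names are only examples):

$ openstack-config --set /etc/nova/nova.conf DEFAULT debug True    # set an option
$ openstack-config --get /etc/nova/nova.conf DEFAULT debug         # read it back
$ openstack-status                                                  # summarize the status of OpenStack services
$ openstack-service restart neutron                                 # restart the neutron-related services
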
View article »

Recent RDO blogs, July 11 2016

Here's what RDO enthusiasts have been blogging about in the last few weeks:

Who is Testing Your Cloud? by Maria Bracho

With test driven development, continuous integration/continuous deployment and devops practices now the norm, most organizations understand the importance of testing their applications.

… read more at http://tm3.org/7k

Clearing the Keystone Environment By Adam Young

If you spend a lot of time switching between different clouds, different users, or even different projects for the same user when working with OpenStack, you’ve come across the problem where one environment variable from an old sourcing pollutes the current environment. I’ve been hit by that enough times that I wrote a small script to clear the environment.

Read more at http://tm3.org/7l

Connecting from your local machine to the TripleO overcloud horizon dashboard by Carlos Camacho

The goal of this post is to show how to chain multiple ssh tunnels to browse into the horizon dashboard, deployed in a TripleO environment.

… read more at http://tm3.org/7m

Improving the RDO Trunk infrastructure by Javier Pena

Quite often, projects tend to outgrow their initial expectations, and in those cases we face issues as they grow. This has been the case with our RDO Trunk repositories and the DLRN tool that builds them.

… read more at http://tm3.org/7n

TripleO manual deployment of 'master' branch by Carlos Camacho

This is a brief recipe about how to manually install TripleO in a remote 32GB RAM box.

… read more at http://tm3.org/7o

Launching a Centos VM in Tripleo Overcloud by Adam Young

My Overcloud deploy does not have any VM images associated with it. I want to test launching a VM.

… read more at http://tm3.org/7p

Red Hat Summit in Review by Rich Bowen

Despite my best intentions of blogging every day at Red Hat Summit, time got away from me, as it often does at these events. There’s always 3 things going on, and it’s hard to find a moment between that first cup of coffee, and stumbling into bed at the end of the night.

… read more at http://tm3.org/7q

Openstack & TripleO deployment using Inlunch by Carlos Camacho

Today I’m going to speak about the first OpenStack installer I used to deploy TripleO. Inlunch, as its name implies, should "Get an Instack environment prepared for you while you head out for lunch."

… read more at http://tm3.org/7r

Networking sessions in Red Hat Summit 2016 by Nir Yechiel

I recently attended the Red Hat Summit 2016 event that took place in San Francisco, CA, on June 27-30. Red Hat Summit is a great place to interact with customers, partners, and product leads, and learn about Red Hat and the company’s direction.

… read more at http://tm3.org/7s

Merging FreeIPA and Tripleo Undercloud Apache installs by Adam Young

My experiment yesterday left me with a broken IPA install. I aim to fix that.

… read more at http://tm3.org/7t

Tokens without revocation by Adam Young

PKI tokens in Keystone suffered from many things, most essentially the trials due to the various forms of revocation. I never wanted revocation in the first place. What could we have done differently? It just (I mean moments ago) came to me.

… read more at http://tm3.org/7u

Liveness by Adam Young

The term Liveness here refers to the need to ensure that the data used to make an authorization check is valid at the time of the check.

… read more at http://tm3.org/7v

TripleO Deep dive sessions #1 (Quickstart deployment) by Carlos Camacho

This is the first video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/7w

View article »

Improving the RDO Trunk infrastructure

Quite often, projects tend to outgrow their initial expectations, and in those cases we face issues as they grow. This has been the case with our RDO Trunk repositories and the DLRN tool that builds them.

I had my first contact with the tool a bit more than a year ago, when I took on the task of migrating it from the virtual machine it was running on to a more powerful one. Fast-forward to the present: we have 5 RDO Trunk builders, and the RDO Trunk repos are used by the TripleO, Kolla, Puppet OpenStack and Packstack CIs (maybe more!). This means we need to provide a more performant and resilient infrastructure than we used to have.

Our main issues

Considering our growth, we were facing several issues:

  • Performance was suffering due to the many workers building packages at once. To make things worse, the same system building packages was also serving them, so any CPU or disk contention would affect any consumers of the repository.

  • While we had a high-availability mechanism in place, it was quite rudimentary, and it involved running lsyncd to synchronize all repo contents to a backup server. With the huge number of small files present in the system, lsyncd only contributed to making things slower.

The solution

With help from kbsingh and dmsimard, we came up with a new design for the RDO Trunk infrastructure:

RDO Trunk infrastructure

  • A server inside the ci.centos.org infrastructure is used to build packages. That server is not publicly accessible, so we need a way to make our repositories available.

  • The CentOS CDN is used to distribute the repositories that successfully passed CI tests, using several directories in the CentOS Cloud SIG space. That gives us a fast, highly available way to make the repos available, especially for end-users and the RDO Test Days, when we need them to be 100% ready.

  • In addition to that, we still need CI users and packagers to be able to access the individual repositories created by DLRN when a package is built. To accomplish this, another server provides those via the well-known https://trunk.rdoproject.org URL.

We needed a way to have the build server push each repo to the public-facing server, and the old lsyncd-based method was not working well for us. Thus, we had to patch DLRN to add this feature, which seems to be working quite well so far.

Future steps

This architecture still has some single points of failure: the build server can fail, and the same could happen to the public-facing server. In both cases, there are mitigation measures in place to minimize the impact (we can quickly rebuild the systems using our Puppet modules), but still, there is work ahead to do.

Do you want to help us do it?

View article »

Red Hat Summit, day 1

Red Hat Summit 2016

The first day at Red Hat Summit in San Francisco was, as always, very busy, with hundreds of people coming by the Community Central area of the exhibit hall to learn about RDO, as well as other community projects including ManageIQ, Ceph, CentOS, and many others.

Due to the diverse nature of the audience, and their many reasons for attending Red Hat Summit, perhaps half of these attendees hadn't heard of RDO, while most of them were familiar with OpenStack. So we have many new people who are going to look at RDO as a way of trying out OpenStack at their organizations.

If you're at Red Hat Summit, do stop by. We have lots of t-shirts left, and we also have TripleO QuickStart USB thumb drives so that you can try out RDO with minimal work. We're in Community Central, to your right as you enter the front doors of the Partner Pavilion.

View article »