RDO Community News

See also blogs.rdoproject.org

OpsTools for RDO

CentOS SIG

In the CentOS community there are Special Interest Groups (SIGs) that focus on specific areas such as cloud, storage, virtualization, or operational tools. These groups are created either to raise awareness of a subject or to drive its development in a focused way. Among them is the Operational Tools group (OpsTools), which focuses on:

  • Performance Monitoring
  • Availability Monitoring
  • Centralized Logging

OpenStack Operational Tools

While OpsTools was created for the CentOS community, it is also applicable to, and available for, RDO. More information can be found on GitHub.

Centralized Logging

The centralized logging solution has the following components:

  • A Log Collection Agent (Fluentd)
  • A Log relay/transformer (Fluentd)
  • A Data store (Elasticsearch)
  • An API/Presentation Layer (Kibana)

The minimum hardware requirements are:

  • 8GB of Memory
  • Single Socket Xeon Class CPU
  • 500GB of Disk Space

Detailed installation instructions can be found here.
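As an illustration of how these pieces fit together, a minimal Fluentd relay configuration might look like the following; the Elasticsearch host name is a placeholder, and the exact directives depend on the Fluentd and plugin versions in use:

```
# Sketch of a Fluentd relay: accept logs forwarded by collection
# agents and store them in Elasticsearch for Kibana to query.
# The host name below is a placeholder.
<source>
  @type forward            # listen for logs from the collection agents
  port 24224
</source>

<match **>
  @type elasticsearch      # provided by the fluent-plugin-elasticsearch plugin
  host elasticsearch.example.com
  port 9200
  logstash_format true     # daily indices that Kibana can discover easily
</match>
```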

Availability Monitoring

The availability monitoring solution has the following components:

  • A Monitoring Agent (Sensu)
  • A Monitoring Relay/Proxy (RabbitMQ)
  • A Monitoring Controller/Server (Sensu)
  • An API/Presentation Layer (Uchiwa)

The minimum hardware requirements are:

  • 4GB of Memory
  • Single Socket Xeon Class CPU
  • 100GB of Disk Space

Detailed installation instructions can be found here.
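To sketch how Sensu drives the checks, a check definition on the server could look like this; the check command comes from the sensu-plugins collection, and the subscription and handler names are assumptions for illustration:

```json
{
  "checks": {
    "check_disk_usage": {
      "command": "check-disk-usage.rb -w 80 -c 90",
      "subscribers": ["base"],
      "interval": 60,
      "handlers": ["default"]
    }
  }
}
```

Agents subscribed to the base subscription would run the check every 60 seconds and publish results back through the RabbitMQ relay to the Sensu server, where Uchiwa presents them.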

Performance Monitoring

The performance monitoring solution has the following components:

  • A Collection Agent (collectd)
  • A Collection Aggregator/Relay (Graphite)
  • A Data Store (Whisper)
  • An API/Presentation Layer (Grafana)

The minimum hardware requirements are:

  • 4GB of Memory
  • Single Socket Xeon Class CPU
  • 500GB of Disk Space

Detailed installation instructions can be found here.
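As a sketch of the agent side, a collectd configuration relaying metrics to Graphite could look like the following; the aggregator host name is a placeholder:

```
# Sketch of a collectd agent configuration: sample CPU and memory
# metrics and relay them to the Graphite aggregator over TCP.
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "aggregator">
    Host "graphite.example.com"   # placeholder for the Graphite relay
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
```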

Ansible playbooks for deploying OpsTools

Besides installing the OpsTools manually, there are Ansible roles and playbooks to automate the installation process; instructions can be found here.
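As a rough sketch, a top-level playbook built on such roles could look like the following; the host group and role names here are assumptions made for illustration, so refer to the linked instructions for the actual names:

```yaml
# Hypothetical playbook applying OpsTools roles to monitoring hosts.
# The host group and role names are illustrative, not the real ones.
- hosts: opstools
  become: true
  roles:
    - fluentd          # log collection/relay
    - elasticsearch    # log data store
    - sensu            # availability monitoring
    - collectd         # performance metrics agent
```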

View article »

Recent blog posts, July 17

Here's what the RDO community has been blogging about in the last few weeks:

Create a TripleO snapshot before breaking it… by Carlos Camacho

The idea of this post is to show how developers can save time by creating snapshots of their development environments instead of redeploying them each time they break.

Read more at http://anstack.github.io/blog/2017/07/14/snapshots-for-your-tripleo-vms.html

Tuning for Zero Packet Loss in Red Hat OpenStack Platform – Part 1 by m4r1k

For Telcos considering OpenStack, one of the major areas of focus can be around network performance. While the performance discussion may often begin with talk of throughput numbers expressed in Million-packets-per-second (Mpps) values across Gigabit-per-second (Gbps) hardware, it really is only the tip of the performance iceberg.

Read more at http://redhatstackblog.redhat.com/2017/07/11/tuning-for-zero-packet-loss-in-red-hat-openstack-platform-part-1/

Tuning for Zero Packet Loss in Red Hat OpenStack Platform – Part 2 by m4r1k

Ready for more Fast Packets?!

Read more at http://redhatstackblog.redhat.com/2017/07/13/tuning-for-zero-packet-loss-in-red-hat-openstack-platform-part-2/

TripleO Deep Dive: Internationalisation in the UI by jpichon

Yesterday, as part of the TripleO Deep Dives series, I gave a short introduction to internationalisation in the TripleO UI: the technical aspects of it, as well as a quick overview of how we work with the I18n team. You can catch the recording on BlueJeans or YouTube, and below is a transcript.

Read more at http://www.jpichon.net/blog/2017/07/tripleo-deep-dive-internationalisation-ui/

View article »

Recent blog posts, July 3

Here's what the community is blogging about lately.

OVS-DPDK Parameters: Dealing with multi-NUMA by Kevin Traynor

In Network Function Virtualization, there is a need to scale functions (VNFs) and infrastructure (NFVi) across multiple NUMA nodes in order to maximize resource usage.

Read more at https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/

OpenStack Down Under – OpenStack Days Australia 2017 by August Simonelli, Technical Marketing Manager, Cloud

As OpenStack continues to grow and thrive around the world the OpenStack Foundation continues to bring OpenStack events to all corners of the globe. From community run meetups to more high-profile events like the larger Summits there is probably an OpenStack event going on somewhere near you.

Read more at http://redhatstackblog.redhat.com/2017/06/26/openstack-down-under-openstack-days-australia-2017/

OpenStack versions - Upstream/Downstream by Carlos Camacho

I’m adding this note as I’m prone to forget how upstream and downstream versions are matching.

Read more at http://anstack.github.io/blog/2017/06/27/openstack-versions-upstream-downstream.html

Tom Barron - OpenStack Manila - OpenStack PTG by Rich Bowen

Tom Barron talks about the work on Manila in the Ocata release, at the OpenStack PTG in Atlanta.

Read more at http://rdoproject.org/blog/2017/07/tom-barron-openstack-manila-openstack-ptg/

Victoria Martinez de la Cruz: OpenStack Manila by Rich Bowen

Victoria Martinez de la Cruz talks Manila, Outreachy, at the OpenStack PTG in Atlanta

Read more at http://rdoproject.org/blog/2017/06/victoria-martinez-de-la-cruz-openstack-manila/

Ihar Hrachyshka - What's new in OpenStack Neutron for Ocata by Rich Bowen

Ihar Hrachyshka talks about his work on Neutron in Ocata, and what's coming in Pike.

Read more at http://rdoproject.org/blog/2017/06/ihar-hrachyshka-whats-new-in-openstack-neutron-for-ocata/

Introducing Software Factory - part 1 by Software Factory Team

Software Factory is an open source software development forge with an emphasis on collaboration and on ensuring code quality through Continuous Integration (CI). It is inspired by OpenStack's development workflow, which has proven reliable for fast-changing, interdependent projects driven by large communities.

Read more at http://rdoproject.org/blog/2017/06/introducing-Software-Factory-part-1/

Back to Boston! A recap of the 2017 OpenStack Summit by August Simonelli, Technical Marketing Manager, Cloud

This year the OpenStack ® Summit returned to Boston, Massachusetts. The Summit was held the week after the annual Red Hat ® Summit, which was also held in Boston. The combination of the two events, back to back, made for an intense, exciting and extremely busy few weeks.

Read more at http://redhatstackblog.redhat.com/2017/06/19/back-to-boston-a-recap-of-the-2017-openstack-summit/

View article »

Improving the RDO Trunk infrastructure, take 2

One year ago, we discussed the improvements made to the RDO Trunk infrastructure in this post. As expected, our needs have changed over the past year, and our infrastructure has had to change with them. So here we are, ready to describe what's new in RDO Trunk.

New needs

We have some new needs to cover:

  • A new DLRN API has been introduced, meant to be used by our CI jobs. The main goal behind this API is to break up the current long, hardcoded Jenkins pipelines we use to promote repositories, and instead have individual jobs "vote" on each repository, with some additional logic to decide which repository needs to be promoted. The API is a simple REST API, defined here.

  • This new API needs to be accessible for jobs running inside and outside the ci.centos.org infrastructure, which means we can no longer use a local SQLite3 database for each builder.

  • We now have an RDO Cloud available to use, so we can consolidate our systems there.

  • Additionally, hosting our CI-passed repositories in the CentOS CDN was not working as we expected, because we needed some additional flexibility that was just not possible there. For example, we could not remove a repository if it had been promoted by mistake.
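To give a feel for how a CI job could cast such a vote, here is a small Python sketch that composes the payload for a result report; the endpoint path and field names are assumptions modelled on typical DLRN API usage, so the linked API definition remains the authoritative reference:

```python
import json

def build_ci_vote(job_id, commit_hash, distro_hash, success, log_url):
    """Compose the JSON body a CI job might POST to the DLRN API
    (e.g. an endpoint such as /api/report_result) to vote on a
    repository. The field names here are illustrative assumptions."""
    return json.dumps({
        "job_id": job_id,
        "commit_hash": commit_hash,
        "distro_hash": distro_hash,
        "success": success,
        "url": log_url,
    })

# Example: a passing vote from a hypothetical promotion job.
payload = build_ci_vote("rdo-ci-gate", "abc123", "def456", True,
                        "https://logs.example.com/job/42")
```

Server-side logic can then aggregate the votes recorded for each repository and decide which one to promote.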

Our new setup

This is the current design for the RDO Trunk infrastructure:

New RDO Trunk infrastructure

  • We still have the build server inside the ci.centos.org infrastructure, and not available from the outside. This has proven to be a good solution, since we are separating content generation from content delivery.

  • https://trunk.rdoproject.org is now the URL to be used for all RDO Trunk users. It has worked very well so far, providing enough bandwidth for our needs.

  • The database has been moved to an external MariaDB server, running on the RDO Cloud (dlrn-db.rdoproject.org). This database is set up in a master-slave configuration, with the slave running on an offsite cloud instance that also serves as a backup machine for other services. This required a patch to DLRN to add MariaDB support.
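With MariaDB support merged, pointing a DLRN builder at the external server comes down to its database connection string. A sketch of the relevant projects.ini excerpt might look like the following; the user name and password are placeholders, not the real credentials:

```ini
# Excerpt from a DLRN projects.ini. Credentials are placeholders;
# the host matches the database server described above.
[DEFAULT]
database_connection=mysql+pymysql://dlrn:secret@dlrn-db.rdoproject.org/dlrn
```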

Future steps

Experience tells us that this setup will not stay like this forever, so we already have some plans for future improvements:

  • The build server will migrate to the RDO Cloud soon. Since we are no longer mirroring our CI-passed repositories on the CentOS CDN, it makes more sense to manage it inside the RDO infrastructure.

  • Our next step will be to make RDO Trunk scale horizontally, as described here. We want to use our nodepool VMs in review.rdoproject.org to build packages after each upstream commit is merged, then use the builder instance as an aggregator. That way, the hardware needs for this instance become much lower, since it just has to fetch the generated RPMs and create new repositories. Support for this feature is already in DLRN, so we just need to figure out how to do the rest.

Read More »