RDO Community News

See also blogs.rdoproject.org

RDO's infrastructure server metrics are now available

Reposted from dev@lists.rdoproject.org post by David Moreau Simard

We have historically been monitoring RDO's infrastructure through Sensu and it has served us well to pre-emptively detect issues and maximize our uptime.

At some point, Software Factory grew an implementation of Grafana, InfluxDB and Telegraf in order to monitor the health of the servers, not unlike how upstream's openstack-infra leverages cacti. This implementation was meant to eventually host graphs such as the ones for Zuul and Nodepool upstream.

While there are still details to be ironed out for the Zuul and Nodepool data collection, there was nothing preventing us from deploying Telegraf everywhere for the general server metrics. It's one standalone package and one configuration file, that's it.
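To illustrate just how small that configuration is, here is a minimal Telegraf configuration along these lines; the hostname, port, database name and input plugins below are hypothetical, not taken from RDO's actual setup.

```toml
# Hypothetical minimal /etc/telegraf/telegraf.conf
[agent]
  interval = "10s"

# Ship metrics to the Software Factory InfluxDB (placeholder URL)
[[outputs.influxdb]]
  urls = ["http://influxdb.example.com:8086"]
  database = "telegraf"

# Basic server health metrics
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
```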

Originally, we had been thinking about feeding the Sensu metric data to InfluxDB… but why even bother if it's there for free in Software Factory? So here we are.

The metrics are now available here. We will use this as a foundation to improve visibility into RDO's infrastructure and make it more "open" and accessible in the future.

We're not getting rid of Sensu, although we may narrow its scope to keep some of the more complex service and miscellaneous monitoring that we need to be doing. We'll see what time has in store for us.

Let me know if you have any questions!

View article »

Summary of rdopkg development in 2017

During 2017, 10 contributors merged 146 commits into rdopkg.

3771 lines of code were added and 1975 lines deleted across 107 files.

54 unit tests were added on top of the existing 32 tests - an increase of 169%, to a total of 86 unit tests.

33 scenarios for 5 core rdopkg features were added in new feature tests, spanning a total of 228 test steps.

3 minor releases increased version from 0.42 to 0.45.0.

Let's talk about the most significant improvements.

Stabilisation

rdopkg started as a developers' tool, basically a central repository to accumulate RPM packaging automation in a reusable manner. Quickly adding new features was easy, but making sure existing functionality works consistently as code is added and changed proved to be a much greater challenge.

As rdopkg started shifting from a developers' power tool to a module used in other automation systems, inevitable breakages started to become a problem and prompted me to adapt development accordingly. As a first step, I tried to practice Test-Driven Development (TDD) as opposed to writing tests after a breakage to prevent a specific case. Unit tests helped discover and prevent various bugs introduced by new code, but testing complex behaviors was a frustrating experience where most of the development time was spent writing unit tests for cases they weren't meant to cover.

Sounds like using the wrong tool for the job, right? And so I opened a rather urgent rdopkg RFE: test actions in a way that doesn't suck and started researching what the cool kids use to develop and test Python software without suffering.

Behavior-Driven Development

It would seem that Cucumber started quite a revolution of Behavior-Driven Development (BDD), and I really like Gherkin, the Business Readable, Domain Specific Language that lets you describe software's behaviour without detailing how that behaviour is implemented. Gherkin serves two purposes: documentation and automated tests.

After some more research on Python BDD tools, I liked behave's implementation, documentation and community the most, so I integrated it into rdopkg and started using feature tests. They make it easy to describe and define expected behavior before writing code. New features now start with a feature scenario which can be reviewed before writing any code. Covering existing behavior with feature tests helps ensure it is both preserved and well defined/explained/documented. Big thanks go to Jon Schlueter, who contributed a huge number of initial feature tests for core rdopkg features.

Here is an example of rdopkg fix scenario:

    Scenario: rdopkg fix
        Given a distgit
        When I run rdopkg fix
        When I add description to .spec changelog
        When I run rdopkg --continue
        Then spec file contains new changelog entry with 1 lines
        Then new commit was created
        Then rdopkg state file is not present
        Then last commit message is:
            """
            foo-bar-1.2.3-3

            Changelog:
            - Description of a change
            """

Proper CI/gating

Thanks to Software Factory, Zuul and Gerrit, every rdopkg change now needs to pass the following automatic gate tests before it can be merged:

  • unit tests (python 2, python 3, Fedora, EPEL, CentOS)
  • feature tests (python 2, python 3, Fedora, EPEL, CentOS)
  • integration tests
  • code style check

In other words, master is now significantly harder to break!

Tests are managed as individual tox targets for convenience.
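A hypothetical tox.ini along these lines illustrates that layout; the target names and commands are illustrative, not copied from the rdopkg repository.

```ini
# Illustrative tox.ini sketch: one target per test suite.
[tox]
envlist = py27,py3,unit,feature,integration,pep8

[testenv:unit]
commands = pytest tests/unit

[testenv:feature]
commands = behave features

[testenv:integration]
commands = bash tests/integration/run.sh

[testenv:pep8]
commands = flake8 rdopkg
```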

Paying back the Technical Debt

I tried to write rdopkg code with reusability and future extension in mind, yet at one point of development, with a big influx of new features and modifications, rdopkg approached a critical mass of technical debt where it got into a spiral of new functionality breaking existing functionality, and with each fix two new bugs surfaced. This kept happening, so I stopped adding new stuff and focused on ensuring rdopkg keeps doing what people use it for before extending (breaking) it further. This required quite a few core code refactors, proper integration of features that were hacked in on the clock, as well as leveraging new tools like the Software Factory CI pipeline and behave, described above. But I think it was a success: rdopkg paid its technical debt in 2017 and is ready to face whatever the community throws at it in the near and far future.

Integration

Join Software Factory project

rdopkg became a part of Software Factory project and found a new home alongside DLRN.

Software Factory is an open source, software development forge with an emphasis on collaboration and ensuring code quality through Continuous Integration (CI). It is inspired by OpenStack's development workflow that has proven to be reliable for fast-changing, interdependent projects driven by large communities. Read more in Introducing Software Factory.

Specifically, rdopkg leverages several Software Factory features, including code review and the CI gating described below.

The rdopkg repo is still mirrored to GitHub and bugs are kept in the Issues tracker there as well, because GitHub is an accessible, public, open space.

Did I mention you can log in to Software Factory using a GitHub account?

Finally, big thanks to Javier Peña, who paved the way towards Software Factory with DLRN.

Continuous Integration

rdopkg has been using human code reviews for quite some time, and it has proved very useful even though I often +2/+1 my own reviews due to a lack of reviewers. However, people inevitably make mistakes. There are decent unit and feature tests now to detect them, so we fight human error with computing power and automation.

Each review and thus each code change to rdopkg is gated - all unit tests, feature tests, integration tests and code style checks need to pass before human reviewers consider accepting the change.

Instead of people setting up machines and testing environments, installing requirements, and waiting for tests to pass, this boring process is now automated on supported distributions, and humans can focus on the changes themselves.

Integration with Fedora, EPEL and CentOS

rdopkg is now finally available directly from the Fedora/EPEL repositories, so the install instructions on Fedora 25+ systems boil down to:

dnf install rdopkg

On CentOS 7+, EPEL is needed:

yum install epel-release
yum install rdopkg

Fun fact: to update Fedora rdopkg package, I use rdopkg:

fedpkg clone rdopkg
cd rdopkg
rdopkg new-version -bN
fedpkg mockbuild
# testing
fedpkg push
fedpkg build
fedpkg update

So rdopkg is officially packaging itself while also being packaged by itself.

Please nuke the jruzicka/rdopkg copr if you were using it previously; it is now obsolete.

Documentation

The rdopkg documentation was cleaned up, proofread, extended with more details and updated with the latest information and links.

Feature scenarios are now available as man pages thanks to mhu.

Packaging and Distribution

Python 3 compatibility

By popular demand, rdopkg now supports Python 3. There are Python 3 unit tests and python3-rdopkg RPM package.

Adopt pbr for Versioning

Most of the initial patches rdopkg handled in the very beginning were related to distutils and pbr, the OpenStack packaging meta-library - specifically, making it work on a distribution with integrated package management and old, conservative packages.

Amusingly, pbr was integrated into rdopkg (well, it actually does solve some problems aside from creating new ones), and in order to release the new rdopkg version with pbr on CentOS/EPEL 7, I had to disable the hardcoded pbr>=2.1.0 checks on an update of python-pymod2pkg, because only an older version of pbr is available from EPEL 7. I removed the check (in two different places) as I did so many times before, and it works fine.

As a tribute to all the fun I had with pbr and distutils, here is a link to my first nuke bogus requirements patch of 2018.

Aside from being consistent with OpenStack-related projects, rdopkg adopted the strict semantic versioning that pbr uses, which means that releases are always going to have 3 version numbers from now on:

0.45 -> 0.45.0
1.0  -> 1.0.0
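In other words, a release version is simply padded out to the three MAJOR.MINOR.PATCH components; a trivial sketch of the mapping (not rdopkg code):

```python
# Pad a short version string to the three MAJOR.MINOR.PATCH
# components required by strict semantic versioning.
def normalize(version):
    parts = version.split(".")
    while len(parts) < 3:
        parts.append("0")
    return ".".join(parts)

print(normalize("0.45"))  # 0.45.0
print(normalize("1.0"))   # 1.0.0
```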

And More!

Aside from the big changes mentioned above, a large number of new feature tests and numerous not-so-exciting fixes, here is a list of changes that might be worth mentioning:

  • unify rdopkg patch and rdopkg update-patches and use alias
  • rdopkg pkgenv shows more information and better color coding for easy telling of a distgit state and branches setup
  • preserve Change-Id when amending a commit
  • allow fully unattended runs of core actions.
  • commit messages created by all rdopkg actions are now clearer, more consistent and can be overridden using -H/--commit-header-file.
  • better error messages on missing patches in all actions
  • git config can be used to override patches remote, branch, user name and email
  • improved handling of patches_base and patches_ignore including tests
  • improved handling of %changelog
  • improved new/old patches detection
  • improved packaging as suggested in Fedora review
  • improved naming in git and specfile modules
  • properly handle state files
  • linting cleanup and better code style checks
  • python 3 support
  • improve unicode support
  • handle VX.Y.Z tags
  • split bloated utils.cmd into utils.git module
  • merge legacy rdopkg.utils.exception so there is only single module for exceptions now
  • refactor unreasonable default atomic=False affecting action definitions
  • remove legacy rdopkg coprbuild action

Thank you, rdopkg community!

View article »

RDO Community Blogposts

If you've missed out on some of the great RDO Community content over the past few weeks while you were on holiday, not to worry. I've gathered the recent blogposts right here for you. Without further ado…

New TripleO quickstart cheatsheet by Carlos Camacho

I have created some cheatsheets for people starting to work on TripleO, mostly to help them to bootstrap a development environment as soon as possible.

Read more at http://anstack.github.io/blog/2018/01/05/tripleo-quickstart-cheatsheet.html

Using Ansible for Fernet Key Rotation on Red Hat OpenStack Platform 11 by Ken Savich, Senior OpenStack Solution Architect

In our first blog post on the topic of Fernet tokens, we explored what they are and why you should think about enabling them in your OpenStack cloud. In our second post, we looked at the method for enabling these.

Read more at https://redhatstackblog.redhat.com/2017/12/20/using-ansible-for-fernet-key-rotation-on-red-hat-openstack-platform-11/

Automating Undercloud backups and a Mistral introduction for creating workbooks, workflows and actions by Carlos Camacho

The goal of this developer documentation is to address the automated process of backing up a TripleO Undercloud and to give developers a complete description about how to integrate Mistral workbooks, workflows and actions to the Python TripleO client.

Read more at http://anstack.github.io/blog/2017/12/18/automating-the-undercloud-backup-and-mistral-workflows-intro.html

Know of other bloggers that we should be including in these round-ups? Point us to the articles on Twitter or IRC and we'll get them added to our regular cadence.

View article »

Blog Round-up

It's time for another round-up of the great content that's circulating our community. But before we jump in, if you know of an OpenStack or RDO-focused blog that isn't featured here, be sure to leave a comment below and we'll add it to the list.

ICYMI, here's what has sparked the community's attention this month, from Ansible to TripleO, emoji-rendering, and more.

TripleO and Ansible (Part 2) by slagle

In my last post, I covered some of the details about using Ansible to deploy with TripleO. If you haven’t read that yet, I suggest starting there: http://blog-slagle.rhcloud.com/?p=355

Read more at http://blog-slagle.rhcloud.com/?p=369

TripleO and Ansible deployment (Part 1) by slagle

In the Queens release of TripleO, you’ll be able to use Ansible to apply the software deployment and configuration of an Overcloud.

Read more at http://blog-slagle.rhcloud.com/?p=355

An Introduction to Fernet tokens in Red Hat OpenStack Platform by Ken Savich, Senior OpenStack Solution Architect

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.

Read more at https://redhatstackblog.redhat.com/2017/12/07/in-introduction-to-fernet-tokens-in-red-hat-openstack-platform/

Full coverage of libvirt XML schemas achieved in libvirt-go-xml by Daniel Berrange

In recent times I have been aggressively working to expand the coverage of libvirt XML schemas in the libvirt-go-xml project. Today this work has finally come to a conclusion, when I achieved what I believe to be effectively 100% coverage of all of the libvirt XML schemas. More on this later, but first some background on Go and XML…

Read more at https://www.berrange.com/posts/2017/12/07/full-coverage-of-libvirt-xml-schemas-achieved-in-libvirt-go-xml/

Full colour emojis in virtual machine names in Fedora 27 by Daniel Berrange

Quite by chance today I discovered that Fedora 27 can display full colour glyphs for unicode characters that correspond to emojis, when the terminal displaying my mutt mail reader displayed someone’s name with a full colour glyph showing stars:

Read more at https://www.berrange.com/posts/2017/12/01/full-colour-emojis-in-virtual-machine-names-in-fedora-27/

Booting baremetal from a Cinder Volume in TripleO by higginsd

Up until recently in TripleO, booting from a cinder volume was confined to virtual instances, but now, thanks to some recent work in ironic, baremetal instances can also be booted backed by a cinder volume.

Read more at http://goodsquishy.com/2017/11/booting-baremetal-from-a-cinder-volume-in-tripleo/

View article »

Gate repositories on Github with Software Factory and Zuul3

Introduction

Software Factory is an easy-to-deploy software development forge. It provides, among other features, code review and continuous integration (CI). The latest Software Factory release features Zuul V3, which provides integration with GitHub.

In this blog post I will explain how to configure a Software Factory instance, so that you can experiment with gating Github repositories with Zuul.

First we will setup a Github application to define the Software Factory instance as a third party application and we will configure this instance to act as a CI system for Github.

Secondly, we will prepare a Github test repository by:

  • Installing the application on it
  • configuring its master branch protection policy
  • providing Zuul job description files

Finally, we will configure the Software Factory instance to test and gate Pull Requests for this repository, and we will validate this CI by opening a first Pull Request on the test repository.

Note that Zuul V3 is not yet released upstream; however, it is already in production, acting as the CI system of OpenStack.

Pre-requisite

A Software Factory instance is required to execute the instructions given in this blog post. If you need an instance, you can follow the quick deployment guide in this previous article. Make sure the instance has a public IP address and TCP/443 is open so that Github can reach Software Factory via HTTPS.
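On a stock CentOS 7 host, opening HTTPS typically means allowing it through firewalld; this is a generic sketch, not a step from the Software Factory documentation:

```shell
# Allow inbound HTTPS (TCP/443) through firewalld and apply the change
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```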

Application creation and Software Factory configuration

Let's create a Github application named myorg-zuulapp and register it on the instance. To do so, follow this section from Software Factory's documentation.

But make sure to:

  • Replace fqdn in the instructions with the public IP address of your Software Factory instance. Indeed, the default sftests.com hostname won't be resolved by Github.
  • Check "Disable SSL verification" as the Software Factory instance is by default configured with a self-signed certificate.
  • Check "Only on this account" for the question "Where can this Github app be installed".

[Screenshots: Configuration of the app, parts 1-3]

After adding the github app settings in /etc/software-factory/sfconfig.yaml, run:

sudo sfconfig --enable-insecure-slaves --disable-fqdn-redirection

Finally, make sure Github.com can contact the Software Factory instance by clicking on "Redeliver" in the advanced tab of the application. Getting the green tick is the prerequisite to go further; if you cannot get it, you will not be able to complete the rest of this article.

[Screenshot: Configuration of the app, part 4]

Define Zuul3 specific Github pipelines

On the Software Factory instance, as root, create the file config/zuul.d/gh_pipelines.yaml.

cd /root/config
cat <<EOF > zuul.d/gh_pipelines.yaml
---
- pipeline:
    name: check-github.com
    description: |
      Newly uploaded patchsets enter this pipeline to receive an
      initial +/-1 Verified vote.
    manager: independent
    trigger:
      github.com:
        - event: pull_request
          action:
            - opened
            - changed
            - reopened
        - event: pull_request
          action: comment
          comment: (?i)^\s*recheck\s*$
    start:
      github.com:
        status: 'pending'
        status-url: "https://sftests.com/zuul3/{tenant.name}/status.html"
        comment: false
    success:
      github.com:
        status: 'success'
      sqlreporter:
    failure:
      github.com:
        status: 'failure'
      sqlreporter:

- pipeline:
    name: gate-github.com
    description: |
      Changes that have been approved by core developers are enqueued
      in order in this pipeline, and if they pass tests, will be
      merged.
    success-message: Build succeeded (gate pipeline).
    failure-message: Build failed (gate pipeline).
    manager: dependent
    precedence: high
    require:
      github.com:
        review:
          - permission: write
        status: "myorg-zuulapp[bot]:local/check-github.com:success"
        open: True
        current-patchset: True
    trigger:
      github.com:
        - event: pull_request_review
          action: submitted
          state: approved
        - event: pull_request
          action: status
          status: "myorg-zuulapp[bot]:local/check-github.com:success"
    start:
      github.com:
        status: 'pending'
        status-url: "https://sftests.com/zuul3/{tenant.name}/status.html"
        comment: false
    success:
      github.com:
        status: 'success'
        merge: true
      sqlreporter:
    failure:
      github.com:
        status: 'failure'
      sqlreporter:
EOF
sed -i s/myorg/myorgname/ zuul.d/gh_pipelines.yaml

Make sure to replace "myorgname" with your organization name.

git add -A .
git commit -m"Add github.com pipelines"
git push git+ssh://gerrit/config master
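The check pipeline above re-triggers on review comments matching the regex (?i)^\s*recheck\s*$, i.e. a case-insensitive "recheck" with optional surrounding whitespace. A quick sanity check of that pattern:

```python
import re

# Same pattern as the pipeline's comment trigger: case-insensitive
# "recheck", alone on the comment, with optional whitespace around it.
RECHECK = re.compile(r"(?i)^\s*recheck\s*$")

for comment in ["recheck", "  Recheck  ", "please recheck"]:
    print(repr(comment), bool(RECHECK.match(comment)))
```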

Setup a test repository on Github

Create a repository called ztestrepo, initialize it with an empty README.md.

Install the Github application

Then follow the process below to add the application myorg-zuulapp to ztestrepo.

  1. Visit your application page, e.g.: https://github.com/settings/apps/myorg-zuulapp/installations
  2. Click “Install”
  3. Select ztestrepo to install the application on
  4. Click “Install”

Then you should be redirected to the application setup page. This can be safely ignored for the moment.

Define master branch protection

We will set up the branch protection policy for the master branch of ztestrepo. We want a Pull Request to have at least one code review approval, and all CI checks to have passed with success, before the PR becomes mergeable.

You will see later in this article that the final job run and the merging of the Pull Request are handled by Zuul.

  1. Go to https://github.com/myorg/ztestrepo/settings/branches
  2. Choose the master branch
  3. Check "Protect this branch"
  4. Check "Require pull request reviews before merging"
  5. Check "Dismiss stale pull request approvals when new commits are pushed"
  6. Check "Require status checks to pass before merging"
  7. Click "Save changes"

Attach the application

Add a collaborator

A second account on Github is needed to act as a collaborator on the repository ztestrepo. Select one at https://github.com/myorg/ztestrepo/settings/collaboration. This collaborator will act as the PR reviewer later in this article.

Define a Zuul job

Create the file .zuul.yaml at the root of ztestrepo.

git clone https://github.com/myorg/ztestrepo.git
cd ztestrepo
cat <<EOF > .zuul.yaml
---
- job:
    name: myjob-noop
    parent: base
    description: This is a noop job
    run: playbooks/noop.yaml
    nodeset:
      nodes:
        - name: test-node
          label: centos-oci

- project:
    name: myorg/ztestrepo
    check-github.com:
      jobs:
        - myjob-noop
    gate-github.com:
      jobs:
        - myjob-noop
EOF
sed -i s/myorg/myorgname/ .zuul.yaml

Make sure to replace "myorgname" with your organization name.

Create playbooks/noop.yaml.

mkdir playbooks
cat <<EOF > playbooks/noop.yaml
- hosts: test-node
  tasks:
    - name: Success
      command: "true"
EOF

Push the changes directly on the master branch of ztestrepo.

git add -A .
git commit -m"Add zuulv3 job definition"
git push origin master

Register the repository on Zuul

At this point, the Software Factory instance is ready to receive events from Github and the Github repository is properly configured. Now we will tell Software Factory to consider events for the repository.

On the Software Factory instance, as root, create the file myorg.yaml.

cd /root/config
cat <<EOF > zuulV3/myorg.yaml
---
- tenant:
    name: 'local'
    source:
      github.com:
        untrusted-projects:
          - myorg/ztestrepo
EOF
sed -i s/myorg/myorgname/ zuulV3/myorg.yaml

Make sure to replace "myorgname" with your organization name.

git add zuulV3/myorg.yaml && git commit -m"Add ztestrepo to zuul" && git push git+ssh://gerrit/config master

Create a Pull Request and see Zuul in action

  1. Create a Pull Request via the Github UI
  2. Wait for the check-github.com pipeline to finish with success

Check test

  1. Ask the collaborator to set their approval on the Pull Request

Approval

  1. Wait for Zuul to detect the approval
  2. Wait for the gate-github.com pipeline to finish with success

Gate test

  1. Wait for the Pull Request to be merged by Zuul

Merged

As you can see, after the run of the check job and the reviewer's approval, Zuul detected that the Pull Request was ready to enter the gating pipeline. During the gate run, Zuul executed the job against the Pull Request change rebased on the current master, then had Github merge the Pull Request once the job ended with success.

Other powerful Zuul features such as cross-repository testing or Pull Request dependencies between repositories are supported but beyond the scope of this article. Do not hesitate to refer to the upstream documentation to learn more about Zuul.

Next steps to go further

To learn more about Software Factory please refer to the upstream documentation. You can reach the Software Factory team on IRC freenode channel #softwarefactory or by email at the softwarefactory-dev@redhat.com mailing list.

View article »

Open Source Summit, Prague

In October, RDO had a small presence at the Open Source Summit (formerly known as LinuxCon) in Prague, Czechia.

While this event does not traditionally draw a big OpenStack audience, we were treated to a great talk by Monty Taylor on Zuul, and Fatih Degirmenci gave an interesting talk on cross-community CI, in which he discussed the joint work between the OpenStack and OpenDaylight communities to help one another verify cross-project functionality.

On one of the evenings, members of the Fedora and CentOS communities met in a BoF (Birds of a Feather) meeting to discuss how the projects relate, and how some of the load - including the CI work that RDO does in the CentOS infrastructure - can better be shared between the two projects to reduce duplication of effort.

This event is always a great place to interact with other open source enthusiasts. While, in the past, it was very Linux-centric, the event this year had a rather broader scope, and so drew people from many more communities.

Upcoming Open Source Summits will be held in Japan (June 20-22, 2018), Vancouver (August 29-31, 2018) and Edinburgh (October 22-24, 2018), and we expect to have a presence of some kind at each of these events.

View article »

Upcoming changes to test day

TL;DR: A live RDO cloud will be available for testing on the upcoming test day. See http://rdoproject.org/testday/queens/milestone2/ for more info.

The last few test days have been somewhat lackluster and have not had much participation. We think there are a number of reasons for this:

  • Deploying OpenStack is hard and boring
  • Not everyone has the necessary hardware to do it anyway
  • Automated testing means that there's not much left for the humans to do

In today's IRC meeting, we were brainstorming about ways to improve participation in test day.

We think that, in addition to testing the new packages, it's a great way for you, the users, to see what's coming in future releases, so that you can start thinking about how you'll use this functionality.

One idea that came out of it is to have a test cloud, running the latest packages, available to you during test day. You can get on there, poke around, break stuff, and help test it, without having to go through the pain of deploying OpenStack.

David has written more about this on his blog.

If you're interested in participating, please sign up.

Please also give some thought to what kinds of test scenarios we should be running, and add those to the test page. Or, respond to this thread with suggestions of what we should be testing.

Details about the upcoming test day may be found on the RDO website.

Thanks!

View article »

Getting started with Software Factory and Zuul3

Introduction

Software Factory 2.7 has recently been released. Software Factory is an easy-to-deploy software development forge that is deployed at review.rdoproject.org and softwarefactory-project.io. It provides, among other features, code review and continuous integration (CI). This new release features Zuul V3, which is now the default CI component of Software Factory.

In this blog post I will explain how to deploy a Software Factory instance for testing purposes in less than 30 minutes and initialize two demo repositories to be tested via Zuul.

Note that Zuul V3 is not yet released upstream; however, it is already in production, acting as the CI system of OpenStack.

Prerequisites

Software Factory requires CentOS 7 as its base Operating System so the commands listed below should be executed on a fresh deployment of CentOS 7.

The default FQDN of a Software Factory deployment is sftests.com. In order to be accessible in your browser, sftests.com must be added to your /etc/hosts with the IP address of your deployment.
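For example, assuming your deployment's address is 192.0.2.10 (a placeholder; use your instance's actual IP):

```shell
# Map the default Software Factory FQDN to the deployment's IP address
echo "192.0.2.10 sftests.com" | sudo tee -a /etc/hosts
```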

Installation

First, let's install the repository of the latest version, then install sf-config, the configuration management tool.

sudo yum install -y https://softwarefactory-project.io/repos/sf-release-2.7.rpm
sudo yum install -y sf-config

Activating extra components

Software Factory has a modular architecture that can be easily defined through a YAML configuration file, located in /etc/software-factory/arch.yaml. By default, only a limited set of components are activated to set up a minimal CI with Zuul V3.

We will now add the hypervisor-oci component to configure a container provider, so that OCI containers can be consumed by Zuul when running CI jobs. In other words, it means you won't need an OpenStack cloud account to run your first Zuul V3 jobs with this Software Factory instance.

Note that the OCI driver, on which hypervisor-oci relies, while totally functional, is still under review and not yet merged upstream.

echo "      - hypervisor-oci" | sudo tee -a /etc/software-factory/arch.yaml

Starting the services

Finally run sf-config:

sudo sfconfig --enable-insecure-slaves --provision-demo

When the sf-config command finishes you should be able to access the Software Factory web UI by connecting your browser to https://sftests.com. You should then be able to login using the login admin and password userpass (Click on "Toggle login form" to display the built-in authentication).

Triggering a first job on Zuul

The --provision-demo option is a special option that provisions two demo Git repositories on Gerrit with two demo jobs.

Let's propose a first change on one of them:

sudo -i
cd demo-project
touch f1 && git add f1 && git commit -m"Add a test change" && git review

Then you should see the jobs being executed on the ZuulV3 status page.

Zuul buildset

And get the jobs' results on the corresponding Gerrit review page.

Gerrit change

Finally, you should find the links to the generated artifacts and the ARA reports.

ARA report

Next steps to go further

To learn more about Software Factory please refer to the user documentation. You can reach the Software Factory team on IRC freenode channel #softwarefactory or by email at the softwarefactory-dev@redhat.com mailing list.

View article »

A summary of Sydney OpenStack Summit docs sessions

Here I'd like to give a summary of the Sydney OpenStack Summit docs sessions that I took part in, and share my comments on them with the broader OpenStack community.

Docs project update

At this session, we discussed a recent major refocus of the Documentation project work and restructuring of the OpenStack official documentation. This included migrating documentation from the core docs suite to project teams who now own most of the content.

We also covered the most important updates from the Documentation planning sessions held at the Denver Project Teams Gathering, including our new retention policy for End-of-Life documentation, which is now being implemented.

This session was recorded; you can watch the recording here:

Docs/i18n project onboarding

This was a session jointly organized with the i18n community. Alex Eng, Stephen Finucane, and yours truly gave three short presentations on translating OpenStack, OpenStack + Sphinx in a tree, and introduction to the docs community, respectively.

As it turned out, the session was not attended by newcomers to the community; instead, community members from various teams and groups joined us for the onboarding. This made it a bit more difficult to work out what the proper focus of the session should be to better accommodate the different needs and expectations of those in the audience. Definitely something to think about for the next Summit.

Installation guides updates and testing

I held this session to gather the community's views on the future of the installation guides and the testing of installation procedures.

The feedback received was mostly focused on three points:

  • A better feedback mechanism for new users who are the main audience here. One idea is to bring back comments at the bottom of install guides pages.

  • To help users better understand the processes described in instructions and the overall picture, provide more references to conceptual or background information.

  • Generate content from install shell scripts, to help with verification and testing.

The session etherpad with more details can be found here:

Ops guide transition and maintenance

This session was organized by Erik McCormick from the OpenStack Operators community. There is an ongoing effort driven by the Ops community to migrate retired OpenStack Ops docs over to the OpenStack wiki, for easy editing.

We mostly discussed a number of challenges related to maintaining the technical content in wiki, and how to make more vendors interested in the effort.

The session etherpad can be found here:

Documentation and relnotes, what do you miss?

This session was run by Sylvain Bauza and the focus of the discussion was on identifying gaps in content coverage found after the documentation migration.

Again, Ops-focused docs turned out to be a hot topic, as well as providing more detailed conceptual information together with the procedural content, and the structuring of release notes. We should also seriously consider (semi-)automating checks for broken links.

You can read more about the discussion points here:

View article »