RDO Community News

See also blogs.rdoproject.org

What did you do in Mitaka? Adam Young

In this installment of our "What did you do in Mitaka" series, I speak to Adam Young about Keystone.

(If the above player doesn't work for you, please download the audio HERE.)

Rich: I'm speaking with Adam Young, who's been working on the Keystone project for the last 4 years or so.

So, Adam, what did you do in Mitaka?

Adam: This release, I finally got a couple of key features into Keystone for dealing with some security issues that have bothered me for a while. The biggest one is the ability to deal with admin - the fact that if you're admin somewhere, you end up being admin everywhere.

This is bug 968696. Yes, I have the number [memorized]. I even had t-shirts made up for it. We're finally closing that bug, although closing it properly will take several releases.

The issue with bug 968696 is that we have no way of splitting out which APIs are supposed to be admin-specific. You can see that there's a difference between adding a new hypervisor and going into a project and creating a virtual machine. They are different levels of administration. There are certain APIs that require admin but that are really project-scoped things. Things like adding a user to a project - role assignment - that's admin level at the project, as opposed to admin level at the overall service.

So what we did is we put a config option in Keystone which lets you say that a certain project is the administrative project. Tokens that are issued for a user and scoped to that project have an additional field on them: is_admin_project. This can be enforced in policy across OpenStack. It does mean that we need to rewrite the policy files. We knew this was going to be a long slog.

There's now a way to say, on a given policy rule, not just that the user has to have the admin role, but that they have to have the admin role on the project that's considered the admin project.

It also allows you to say, if I have admin on the admin project, I can go into a non-admin project, and do these administrative type things. But somebody who is admin on some other common project, some other regular project, does not have the ability to do that. So the projects continue to be the main scoping for access control, but we have this level of defining things to be cloud- or service-level administration.
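(For illustration: as I recall, the admin project is named in keystone.conf, and policy rules can then key off the is_admin_project field in the token. The option names, project name, and rule strings below are a sketch, not the exact defaults any service ships.)

    # keystone.conf - designate the admin project (names are examples)
    [resource]
    admin_project_name = admin
    admin_project_domain_name = Default

    # policy.json fragment for a service - require admin on the admin project
    "admin_required": "role:admin and is_admin_project:True",
    "os_compute_api:os-hypervisors": "rule:admin_required"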

So that was one of the big ones. We'll continue to drive on with using that - making that policy change throughout Nova, Glance, and the rest, because if they're not enforcing it, then it is still a global admin. So there's more work to be done there.

The other big feature falls into a similar type of problem-set, which is, roles are very, very coarse grained. If you look at the early days of OpenStack, there really was only one role, which was admin. And then everybody else was a member of a project.

By the time I started, which was four years ago, there was already the idea that a role assignment was for a role on a project. Role assignments were no longer global, even though they had been in the past. A lot of people still kept treating them like global things, but the mechanism scoped them to a project. What we wanted to be able to say is: I want to have a different role for different operations - different workflows - so that I can assign small things, and delegate smaller pieces of what I need to do to a user.
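(Concretely, a role assignment is always a role on a project; with the OpenStack CLI it looks something like the line below, where the user, project, and role names are just examples.)

    openstack role add --user alice --project demo member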

A good example is that I might not want somebody - or some offline service - to be able to create a virtual machine on my behalf, but I do want them to be able to reboot one of mine. They can monitor, and if it locks up, they should be able to reboot that virtual machine.

So what we have is this concept of an implied role, or role inference rule.

The first one, and the example that everyone kind of gets, is: if I'm an admin on a project, I should be a member on that project. So now, when I have a token and I request admin on there, I will see both the admin role and the member role inside that project, even if I don't have the member role explicitly assigned to me.

Now in the API I could specify, you only need the member role to do this. So now we've just made it easier to roll up finer-grained permissions to a more coarse-grained role assignment.
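(For reference, an implied-role rule is created through the Keystone v3 API, roughly as below; the endpoint host, token variable, and role IDs are placeholders.)

    curl -X PUT -H "X-Auth-Token: $OS_TOKEN" \
      https://keystone.example.com:5000/v3/roles/<admin_role_id>/implies/<member_role_id>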

This, it turned out, was a feature that somebody else could use to implement something they were talking about doing. Henry Nash, who works at IBM, had this proposal for domain-specific roles. He wanted somebody in an organization to be able to specify the name of the role that they gave to people, with that role also including a subset of what the person could do there.

I said, that really sounds like a role inference rule, but one where a) the top-level name is assigned by somebody who's not a global cloud administrator - the domain administrator - and b) it doesn't show up in the token. I don't want domain-specific roles to be something that people are enforcing policy on. I want it to be [enforced only on] the global roles.

So we worked together and came up with this idea that we get implied roles in first, and then we would build the domain-specific roles on top of that.

It took a lot of collaboration - a lot of arguing and a lot of understanding … to explain to the other person what our problems were, and coming to the idea that a very elegant mechanism here could be the baseline for both features.

So we're presenting on both of those things together, actually, on Tuesday at Summit. (OpenStack Summit, Austin - video here) The Nash and Young reunion tour. We couldn't get either Stills or Crosby to show up.

Beyond that, Keystone is, as somebody once described it, performance art. Getting Keystone features in requires a lot of discussion. It's really fundamental to OpenStack that Keystone just work.

A lot of people will file bugs against Keystone, because it's the first thing that reports a failure. Or someone reports a failure trying to talk to Keystone. So a lot of time has been spent in troubleshooting other people's problems, connectivity problems, configuration problems, and getting it so that they understand, yes, you can't talk to Keystone, but it's because you have not configured your service to talk to Keystone.

And one of the big victories that we had is that a whole class of errors that we had were due to threading issues in eventlet, and eventlet is dead. Eventlet has been deprecated for a while. In TripleO we have it so that Keystone is not being deployed in eventlet any more. This means that we don't have the threading issues there.

A big lesson that we've learned is that the cloud deployer does not necessarily own the user database. And for large organizations, there already is a well-established user database, and so we want to reuse … a lot of people point to stuff like Facebook, and Google, and LinkedIn, as public authentication services now. OpenID, OAuth, and all those, are protocols that make it possible to consume these big databases of identity, and then you can use your same credentials that you use to talk to Facebook, to talk to your apps.

Well, this same pattern happens a lot inside the enterprise, and in fact it's the norm. It's called Single Sign On, and a lot of people want it. And pushing Single Sign On technologies, and making those work within Keystone has been a long-running task.

There's been a lot of LDAP debugging. It's still one of the main pain points for people. LDAP is a primary tool for most large organizations for managing their users. The D in there stands for Directory, and it's the user directory for large organizations. A lot of people have to make it work with Active Directory - with Microsoft being so dominant in the enterprise, and Active Directory being their solution for that, it's a pain point. So one of the big things we've been trying to do is make that integrate better with existing deployments.

Over multiple releases we had this idea that first you had one monolithic identity back-end. And then we split that out into two pieces, one which was the identity, the users and the groups, and the other which was the assignment - what roles you had on each project.

Now that we can vary those two things independently, what we started doing is saying, how about having multiple things on that identity side? Federation kind of works in there, but before that we actually said you could have SQL back your user database, and that would be your basic install. All the service users for Keystone, Nova, and so on would be put in there, and then let's make a separate domain and put LDAP in there. So the big push has been to do that.
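(Concretely, that split is driven by Keystone's domain-specific configuration. A minimal sketch follows; the domain name, file path, and LDAP settings are examples only.)

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = true
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.corp_ldap.conf - one file per domain
    [identity]
    driver = ldap
    [ldap]
    url = ldap://ldap.example.com
    user_tree_dn = ou=Users,dc=example,dc=com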

Well it turns out that in order to do that we need everybody to use the Keystone V3 API, because they need domain support. Specifically internally, the different services need to do this.

A lot of people have helped make this happen. This has been a multi-project effort because not only do we need to solve it within all the Keystone components, but then there are places where one service calls into another and needs to authenticate with Keystone and those calls there are calling through client network access.

So we're finally at a place where V3 Keystone API everywhere is a reality, or can be a reality, and so getting the pain of LDAP into its own domain has been an ongoing theme here. I really feel it's going to make things work the way people want them to work.

One of the nice benefits of this, especially for new deployments, is they no longer have to put service users into LDAP. This was a dealbreaker for a lot of people. LDAP is a read-only resource. If, say, the OpenStack deployer does not own the user database, that includes service users. So being able to have service users in a small database inside Keystone, and consume the rest of the identity data from LDAP, has always been the dream.

Keystone moves very slowly. It's a very cautious project, and it has to be.

Rich: Because it has to just work.

Adam: It has to just work, and it's the target for security. If you can hack Keystone, then you have access to the entire cloud. And if you're using LDAP, you have the potential to provide access to things beyond the cloud.

R: Tell me about some things that are coming in upcoming releases.

A: One that I'm looking forward to in Newton is the ability to unify all the delegation mechanisms. What we're doing with trusts - which is something I built specifically for Heat a couple of releases ago - has grown into this way of a user being able to delegate to another user a subset of their permissions, and to be able to delegate the ability to delegate. You don't have to be an admin in order to create some sort of delegation; you can be an everyday user and hand off to, say, a scheduled task, the ability to do something on your behalf at midnight when you're on vacation.
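(For context, a trust is created through Keystone's v3 OS-TRUST API. A rough, abridged sketch follows; the IDs, role name, and expiry are placeholders.)

    curl -X POST https://keystone.example.com:5000/v3/OS-TRUST/trusts \
      -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" \
      -d '{"trust": {"trustor_user_id": "<my_user_id>",
                     "trustee_user_id": "<service_user_id>",
                     "project_id": "<project_id>",
                     "impersonation": true,
                     "roles": [{"name": "member"}],
                     "expires_at": "2016-12-31T00:00:00Z"}}'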

The other one that I'm really looking forward to trying to drive on is the adoption of Fernet. We were hoping to get that into the Mitaka release. Fernet is in there, but trying to make it the default.

Fernet is an ephemeral token. The way that tokens work with Keystone now … it's a UUID. It's a claim check that you can hand to somebody and they can go back to Keystone, assuming they have authority to do this, and say, what data is associated with this token?

Well, Keystone has to record that.

If you have two different Keystone servers, and they're both issuing out tokens, they have to talk to the same database. Because I might get one issued from one, and go to a different one to validate it.

This is a scalability issue.

PKI tokens, which I wrote many releases ago, were a first attempt to deal with this, because you could have different signers. But PKI tokens had a lot of problems. Dolph and Lance, who are Rackspacers, who have been long-term contributors, came up with this implementation called Fernet tokens. The idea is that you use symmetric cryptography. You take the absolute minimal amount of data you need to reconstitute the token and you sign it. Now, the Keystone server is the only thing that has this key. So that means it's the only thing that can decrypt it. So you still need to pass the token back to Keystone to validate it, but Keystone no longer has to go to a database to look this up. It can now take the Fernet token itself, and say here's what it means. And if you have two different Keystone servers, and they share keys, you can get one issued from one, and have the other one validate it for you.
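(In practice, switching a deployment to Fernet looks roughly like the steps below; the commands and paths are the common defaults, shown as a sketch rather than a full procedure. The key repository has to be copied to every Keystone node so that any of them can validate a token issued by another.)

    # generate the symmetric key repository (default: /etc/keystone/fernet-keys)
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

    # then, in /etc/keystone/keystone.conf:
    [token]
    provider = fernet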

R: Well, thank you again for taking the time to do this.

A: My pleasure.

View article »

What did you do in Mitaka? David Moreau Simard

Continuing our series "What Did You Do In Mitaka?", I spoke with David Moreau Simard.

(If the above player doesn't work for you, please download the recording HERE.)

Rich: I'm speaking with David Moreau Simard, who is one of the engineers who works on RDO at Red Hat. Thanks for taking time to speak with me.

DMSimard: It's a pleasure.

R: What did you work on in Mitaka?

D: I'm part of the RDO engineering team at Red Hat. My role is mostly around continuous integration and making sure that RDO works. This means that some days I could be implementing new ways of testing RDO, and other days fixing bugs identified by our CI.

As far as Mitaka is concerned, we got Packstack to a point where it's able to install and run Tempest against itself. This is awesome because it's a great way to make sure everything works properly, sort of a sanity check.
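(As an illustration, an all-in-one run with the Tempest sanity check enabled looks something like the command below; the --run-tempest flag name is from memory and may differ by Packstack version, so check packstack --help.)

    packstack --allinone --run-tempest=y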

This improvement allowed us to make Packstack gate against itself upstream to prevent regressions from merging by testing each commit. This was part of a plan to make Packstack part of our testing pipeline.

In Mitaka, we started using a new project called WeIRDO. It's meant to use upstream gate integration tests inside our own RDO testing pipeline. We really improved our testing coverage in Mitaka with TripleO quickstart and WeIRDO. With WeIRDO, we're able to easily test RDO trunk packages against Packstack and Puppet-OpenStack and the jobs that they provide.

We built a great relationship with projects that consume RDO packages throughout development cycles. We mutually benefit from essentially testing each other. So it makes a lot of sense to stay in touch with these guys.

Overall, I can really tell that the testing coverage for RDO has improved tremendously, even since Liberty. Mitaka is really the best and most tested release of RDO yet. I want to extend my thanks to everyone who was involved in making it happen.

R: So how about in Newton? What do you think is coming in Newton for you?

D: So, Mitaka was already an awesome cycle from the perspective of testing coverage that we had for RDO. We want to keep working on that foundation to increase the coverage further. I also want to spend some time on improving monitoring and automation to make troubleshooting problems easier.

Trunk breaks all the time, and so it takes a lot of time to troubleshoot that, and we want to make that easier. But we also want visibility into the problems that could be coming.

Earlier I talked about WeIRDO and how we use upstream gate jobs in our testing pipeline. In Newton, we want to add Kolla back into our testing pipeline through WeIRDO. I had some issues getting Kolla to work reliably in Mitaka but they've fixed some problems upstream since then so we'll take another look at it.

Just recently, there's also the Chef OpenStack community – they've been working on integration testing their cookbooks in the gate. I'm definitely interested in seeing what they have, and possibly leveraging their work once they're ready.

The other large chunk of work will probably come from our effort in streamlining and centralizing the different components of RDO. Right now, we're a bit all over the place… But we've already started working towards moving to a Software Factory instance. Software Factory is a project that provides upstream's testing ecosystem (Git, Zuul, Nodepool, Gerrit, and Jenkins) in a box, as an appliance.

This will definitely make things easier for everyone since everything from RDO will be expected to be in this one place.

R: Thank you very much for your time.

D: I appreciate yours, and it was great working on this great release.

David is dmsimard on Twitter, github, and on IRC.

View article »

What did you do in Mitaka? Emilien Macchi

Continuing my series of interviews with RDO engineers, here's my chat with Emilien Macchi, who I talked with at OpenStack Summit last week in Austin.

If you'd like to talk about what you did in Mitaka, please contact me at rbowen@redhat.com.

(If the above player doesn't work for you, please download the audio HERE.)

Rich: I'm speaking with Emilien Macchi, who is one of the people that worked on the Mitaka release of OpenStack. Thank you for taking time to speak with me.

Emilien: Thank you for welcoming me.

R: What did you do in Mitaka?

E: In Mitaka, most of my tasks were related to packaging testing. I'm currently leading the Puppet OpenStack group, which is a team where we develop Puppet modules that deploy OpenStack in production. Red Hat is using the Puppet modules in TripleO, so we can automate the deployment of OpenStack services. Recently we switched the gates of OpenStack Infra to use RDO packaging. We had a lot of issues before because we didn't have much CI. But since Mitaka, the RDO and the Puppet teams have worked a lot together to make sure that we can provide some testing together. We can fix things very early when something is broken upstream - like when we have a new dependency, or when we have a change in the requirements. It was the kind of test where you provide very early feedback to the RDO people who are doing the packaging. It was very interesting to work with RDO.

We provided early feedback on the builds, on the failures. We also contributed to … I say "we", I don't like to say "I" … we, my group, my Puppet group, we contributed to the new packages that we have in Mitaka. We updated the spec in the package so we could have more features, and more services in the RDO projects.

Testing! Testing every day. Testing, testing, testing. Participating in the events that the RDO community is having, like the weekly meetings, the testing days. It was very interesting for us to have this relationship with the RDO team.

So I'm looking forward to Newton.

R: What do you anticipate you'll be working on in Newton?

E: Well, that's an interesting question. We are talking about this week. [At OpenStack Summit, Austin.]

The challenge for us … the biggest challenge, I guess, is to follow the release velocity of upstream projects. We have a bunch of projects in OpenStack, and it's very hard to catch up with all of the projects at the same time. In the meantime we have to, because those projects are in the RDO project and also in the products, so that's something we need to … we need to scale the way that we're producing all this work.

Something we are very focused on right now is to use the Software Factory project, which is how OpenStack Infra is operating to build OpenStack software. This is a project - an open source project - which is now used by RDO. We can have this same way as upstream, but for downstream - we can have the same way to produce the RDO artifacts, like the packages and the Puppet modules, and so on. That's the next big challenge, is to be fully integrated with Software Factory, to implement the missing features we will need to make the testing better.

Yeah, that's the biggest challenge: improving the testing, stopping doing manual things, automating all the things. That is the next challenge.

R: Thank you very much. Thanks for your time.

E: Thank you for your time.

View article »

[Announcement] Migration to review.rdoproject.org

After releasing RDO Mitaka, we migrated our packaging workflow from GerritHub to a brand-new platform hosted at https://review.rdoproject.org, right before the Newton Summit. During the last cycle, we worked with the Software Factory folks to build a new packaging workflow based on the very same foundations as upstream OpenStack (using Gerrit, Zuul, and Nodepool). We're now relying on RPMFactory, which is a specialization of Software Factory fitted for RPM packaging needs.

The migration was planned in order to satisfy some criteria:

  • provide a consistent user experience to RDO contributors
  • simplify and streamline our workflow
  • consolidate resources

Workflow changes

So now, packages are maintained through https://review.rdoproject.org, and package dist-git repositories are replicated to the rdo-packages GitHub organization. The rdopkg utility has been updated to support the new workflow with new commands: clone, review-patch and review-spec.

Anyone can submit changes for review; they can be approved by the following groups:

  • rdo-provenpackager: much like Fedora provenpackagers, they can +2 and +W changes for all projects.
  • PROJECT-core: package maintainers listed in rdoinfo can +2 and +W changes for their projects.

Howto

We're working to refresh documentation, but here's a short howto to hack on packaging:

  1. dnf install -y rdopkg
  2. rdopkg clone [-u <user>] <package>. This will create the following remotes:
    • origin: dist-git
    • patches: upstream sources at current package version
    • upstream: upstream sources
    • review-origin: used to interact with RPMFactory
    • review-patches: used to interact with RPMFactory
  3. modify packaging
  4. rdopkg review-spec
  5. review then merge

For downstream patches, the workflow is a bit different as we don't merge patches but keep reviews open in order to keep the whole patch chain history. rdopkg has been updated to behave properly in that aspect.

We still have improvements coming, but no major changes to this workflow. If you have any questions, feel free to ping us on IRC (#rdo on Freenode) or the mailing list.

Regards, The RDO Eng. team

View article »

Technical definition of done

Before releasing Mitaka, we agreed on a technical definition of done. This can always evolve to add more coverage, but this is what we currently test when deciding whether a release is ready from a technical perspective:

  • Packstack all-in-one deployments testing the three upstream scenarios
  • TripleO single-controller, single-compute deployment validated with Tempest smoke tests
  • TripleO three-controller, single-compute deployment with Pacemaker, validated using a Heat-template-based ping test

These same tests are used to validate our trunk repos.

View article »

Vulnerability Management for OpenStack Newton Cycle

OpenStack engineers met in Austin last week to design the Newton cycle. Here is a report for the Security Project and other associated efforts.

Requirements for the vulnerability:managed governance tag

During Mitaka, the vulnerability management team ("VMT") introduced a new vulnerability:managed tag to help identify supported projects. In essence, this tag indicates that a project's security bug reports are triaged by the VMT according to its taxonomy. For class A reports, the VMT issues an OpenStack Security Advisory ("OSSA").

The VMT documented a set of requirements to get the tag:

  • Deliverable repositories,
  • Dedicated point of contact, usually a subset of the core team called coresec,
  • PTL agrees to act as a VMT liaison to escalate severe issues,
  • Bug tracker reconfiguration, and
  • Third-party review/audit/threat analysis.

The last point (item 5) intends to ensure that the VMT doesn't become a bottleneck for projects with poor security practices or that have unknown fundamental flaws. By design, the VMT is a small team, composed of 3 volunteers. Thus, new projects must demonstrate that they won't cause an unmanageable number of security bug reports.

In the next section, I will set out how we can efficiently increase the number of vulnerability:managed projects, as designed during the Newton summit (see the etherpad).

What VMT members need to know

Since day 1, doing VMT work for OpenStack has been a challenging task due to:

  • A wide attack surface,
  • Source code and api version changes,
  • Deployment modes with distinct threats, and
  • Coordination between different communities.

Here is the list of questions that need to be answered when managing a new project:

  • What are the input data formats and how are they processed?
  • Which actions are restricted to operators?
  • What has been considered a vulnerability in previous bug reports?
  • What a malicious user can do depends on:
    • Available services, including the non-obvious ones, such as nova-novncproxy,
    • API endpoint paths, including old versions, such as glance v1, and
    • Every other network-facing service to which a user can connect.

A document answering the above questions will likely be required for any vulnerability:managed tag application. Moreover, already supported projects will also need to validate the above requirements in order to keep the tag.

This document needs to be maintained within the OpenStack community using open code review. The security-guide or project documentation are appropriate locations to publish it.

Here is the proposed threat analysis process and Anchor's threat analysis.

Stable Release

Major changes to the OpenStack release process were introduced over the last cycles:

  • Integrated releases, such as 2015.1.2, are now gone,
  • Projects may use different release models,
  • Version numbers are unified (semver), and
  • Publication to the website is automated.

The "stable summit" set new goals for the Newton cycle:

  • Increase stable release support to 18 months with an option for 6 additional months,
  • Backwards compatibility for libraries, and
  • Co-installability requirements.

End of life policy is the most important topic for the VMT since it increases the number of supported branches. Infra and QA are also directly affected since they have to maintain the CI capacity for supported stable releases. While in theory it's possible to extend support, here is a list of practical blockers to be addressed:

  • Stable branch gates break too often, see openstack-health,
  • Backport to old releases is time consuming when the affected code went through several refactors, and
  • Long Term Stable (LTS) versions aren't an option because the upgrade process must deploy each version.

In summary, the Kilo release's end of life won't be extended and the branch will likely be closed soon (EOL: 2016-05-02). Further, external dependency requirements are not supported by the VMT, but it is worth noting that a new Requirements Team will be created to manage this increasingly difficult situation.

More details are in these two etherpads: eol-policy and stable summit.

Issue tracker

Last but not least, there was a design session to discuss issue tracking for OpenStack projects. This is a critical topic for vulnerability management since most VMT work actually happens on the issue tracker.

OpenStack projects currently use launchpad.net to track issues and write blueprints for feature tracking. Since the workflow is sub-optimal, there is an ongoing effort to replace Launchpad with something better. Migration issues aside, the real concern is that we may end up with another sub-optimal solution which doesn't justify the migration cost.

On the other hand, the OpenStack community initiated storyboard, which is designed to address the exceptional needs of OpenStack development. Unfortunately, the project has stalled due to a lack of reviewers.

The OpenStack community has three main options regarding issue tracking:

  • Develop its own issue tracker (e.g. storyboard),
  • Operate a ready-to-use solution (e.g. maniphest), or
  • Use an external service (e.g. launchpad).

This topic will be discussed in the following weeks. Hopefully it will lead to a long term decision from the TC with Infra approval.

Conclusion

Once again, we have made great progress and I'm looking forward to further developments. Thank you all for the great OpenStack Newton Design Summit!

Tristan Cacqueray

OpenStack Vulnerability Management Team

View article »

RDO BoF at OpenStack Summit, Austin

On Tuesday afternoon about 60 people gathered for the RDO BoF (Birds of a Feather) session at OpenStack Summit in Austin.

By the way, the term Birds of a Feather comes from the saying "Birds of a feather flock together", which refers to like-minded people gathering.

The topics discussed can be found in the agenda.

The network was friendly, and we managed to capture the whole session in a Google Hangout. The video of the session is HERE.

And there are several pictures from the event on the RDO Google+ page.

Thank you to all who attended. You'll see follow-up discussion on a number of the topics discussed on rdo-list over the coming weeks.

View article »

What did you do in Mitaka? Javier Peña

We're continuing our series of "What did you do in Mitaka?" interviews. This one is with Javier Peña.

(If the above player doesn't work for you, you can download the file HERE.)

Rich: Today I'm speaking with Javier Peña, who is another member of the Red Hat OpenStack engineering team. Thanks for making time to speak with me.

Javier: Thank you very much for inviting me, Rich.

R: What did you work on in the Mitaka cycle?

J: I've been focusing on three main topics. The first one was keeping the DLRN infrastructure up and running. As you know, this is the service we use to create RPM packages for our RDO distribution, straight from the upstream master commits.

It's been evolving quite a lot during this cycle. We ended up with so much success that we're having infrastructure issues. One of the topics for the next cycle will be to improve the infrastructure, but that's something we'll talk about later.

These packages have now been consumed by the TripleO, Kolla, and Puppet OpenStack CI, so we're quite well tested. Not only are we testing them directly from the RDO trunk repositories, but we have external trials as well.

On top of that, I have also been working on packaging, just like some other colleagues on the team who you've already had a chance to talk to - Haïkel, Chandan - I have been contributing packages both to upstream Fedora and to RDO directly.

And finally, I'm also one of the core maintainers of Packstack. In this cycle we've been adding support for some services such as Aodh and Gnocchi. We also switched MySQL support on the Python side: we switched libraries, and we had to do some work with the upstream Puppet community to make sure that PyMySQL, which is now the default Python library used upstream, is also used inside the Puppet core, so we can use it in Packstack.
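(In practice, that switch shows up in each service's database connection string; an illustrative fragment follows, where the host, credentials, and database name are examples.)

    [database]
    connection = mysql+pymysql://nova:secretpass@192.0.2.10/nova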

R: You mentioned briefly infrastructure changes that you'll be making in the upcoming cycle. Can you tell us more about what you have planned for Newton?

J: Right now the DLRN infrastructure is building six different lines of repositories. We have CentOS master, which is now Newton, plus Mitaka, Liberty, and Kilo. And we have two branches for Fedora as well. So this is quite a heavy load for the VMs that we're running right now. We were having some issues in the cloud where we were running that instance. So what we're doing now is migrating to the CentOS CI infrastructure. We'll have a much bigger machine there. And also, we will be publishing the resulting repositories using the CentOS CDN, which is way more reliable than what we could build with individual VMs.

R: Thank you again for your time. And we look forward to seeing what comes in Newton.

J: Thank you very much, Rich.

View article »

What did you do in Mitaka? Haïkel Guémar

In this installment of the "What Did You Do in Mitaka" series, I'm speaking with Haïkel Guémar.

(If the above player doesn't work for you, you can download the audio HERE.)

Rich: Thanks for making time to speak with me.

Haïkel: Thank you.

R: So, tell me, what did you do in Mitaka?

H: I work on the RDO engineering team - the team that is responsible for the stewardship of RDO. For this cycle, I've been focusing on our new packaging work flow.

We were using, for a long time, a piece of infrastructure taken from Fedora, CentOS, and GitHub. This didn't work very well, and was not providing a consistent experience for the contributors. So we've been working with another team at Red Hat to provide a more integrated platform, and one that mirrors the one that we have upstream, based on the same tools - meaning Git, Gerrit, Jenkins, Nodepool, Zuul - that is called Software Factory. So we've been working with the Software Factory team to provide a specialization of that platform called RPMFactory.

RPMFactory is a platform specialized for producing RPMs in a continuous delivery fashion. It has all the advantages of the old tooling we have been using, but with more consistency. You don't have to look in different places to find the source code, the packaging, and stuff like that. Everything is under a single portal.

That's what I've been focusing on during this cycle, on top of my usual duties, which are producing packages and fixing them.

R: And looking forward to the Newton release, what do you think you're going to be working on in that cycle?

H: While we've been working on the new workflow, we set a new goal, which is to decrease the gap between upstream releases and RDO releases to two weeks. Well, we did it on the very same day for Mitaka! So my goal for Newton would be to do as well as this time, or even better. Why not under two hours? Not to put more pressure on ourselves, but to try to release RDO almost at the same time as the upstream GA. And also with the same quality standards.

One of the few things that I was not happy with during the Mitaka cycle was the fact that we didn't manage to release some packages in time, so I'd like to do better. Soon enough I will be asking people to fill in a wish list for packaging, so that we are able to work on that earlier, and so we can release them on time with GA.

R: Thanks again for taking the time to do this.

H: Thank you Rich, for the series.

View article »

What did you do in Mitaka? Chandan Kumar

Next in our series of "What Did You Do In Mitaka" articles, I spoke with Chandan Kumar, who works on packaging for RDO, and also is active on the mailing list and on IRC.

See also, Ivan Chavero and Ihar Hrachyshka's interviews.

(If the above player doesn't work for you, you can download the podcast recording HERE.)

Rich: RDO Mitaka was released a little over a week ago. I'm speaking with Chandan Kumar, who is one of the people who is very active in the RDO community. Thanks for taking time to speak with me.

Could you tell me what you did during the Mitaka cycle?

Chandan: Thank you Rich.

Well, this release I was mostly packaging and reviewing RDO packages. At the start of the Mitaka release I was tracking upstream global requirements, and found some packages missing from RDO. So I packaged and maintained them.

The list of packages includes python-wsgi_intercept, python-reno, python-tosca-parser, python-weakrefmethod, python-XStatic-roboto-fontface, and many more, of which python-reno is the most popular because it is used for adding and deleting release notes in upstream OpenStack projects.

In the midcycle, I got a chance to work with Matthew from CERN. At the same time I had just completed packaging the python-magnumclient package, and helped in packaging Magnum for RDO.

I also worked with Javier, Alan, and Haïkel on porting the spec files of the Oslo libraries and their dependencies to Python 2 and Python 3 packages. At that time there was a need for RDO-specific spec templates for OpenStack clients, tests, and Oslo libraries, which we added.

Now let's come to the end of the Mitaka release. We created Python tests sub-packages for all OpenStack service packages present in RDO. Those are consumed by the Puppet OpenStack CI, and that was a great achievement for me.

And lastly, I have done code contributions and documentation changes for DLRN.

That was an exciting release in RDO land.

R: Do you expect to continue participating at the same level in the Newton cycle?

C: Yes. Since the Newton cycle has just started, Magnum trunk DLRN builds failed because of a missing package, python-k8sclient. I packaged that and, with the help of Alan, imported it into DLRN. Now it's working fine.

I will continue to do packaging and make code contributions to rdopkg and DLRN. Apart from that, I will continue to contribute to the Packstack OpenStack module, and try to add new OpenStack services to the RDO ecosystem.

R: Thank you very much for taking time to speak with me.

C: Thanks, Rich.

View article »