A framework for lightweight open source governance

Any group of humans needs some form of governance: a set of rules the group follows in order to address issues and make clear decisions. Even the absence of rules (anarchy) is a form of governance! At the opposite end of the spectrum is dictatorship, where all decisions are made by one person. Open source projects are groups of humans, and they are no exception. They can opt for various governance models, which I detailed in a previous article four years ago (how time flies!).

That article compared various overall models in terms of which one would best ensure the long-term survival of the community by avoiding revolutions (or forks). It advocated for a representative democracy model, and since then I've been asked several times for the best recipe to implement it. However, there are numerous trade-offs in the exercise of building governance, and the "best" model depends a lot on the specifics of each project's situation. So, rather than detail a perfect one-size-fits-all governance recipe, in this article I'll propose a framework of three basic rules to keep in mind when implementing it.

This simple 3-rule model can be used to create just enough governance, a lightweight model that should be sustainable over the long run, while avoiding extra layers of useless bureaucracy.

Rule #1: Contributor-driven bodies

Governance bodies for an open source project should be selected by the contributors to the project. I'm not talking about governance bodies for open source Foundations (which generally benefit from having some representation of their corporate sponsors chiming in on how their money shall be spent). I'm talking about the upstream open source project itself, and how the technical choices end up being made in a community of contributors.

This rule is critical: it ensures that the people contributing code, documentation, usage experience, mentoring time or any other form of contribution to the project are aligned with the leadership of the project. When this rule is not met, the leadership and the contributors gradually drift apart, to the point where the contributors no longer feel like their leadership represents them. This situation generally ends with contributors making the disruptive decision to fork the project under a new, contributor-aligned governance, leaving the old governance body with a trademark and an empty shell to govern.

One corollary of that first rule is that the governance system must regularly allow replacement of current leaders. Nobody should be appointed for life, and the contributors should regularly be consulted, especially in fast-moving communities.

Rule #2: Aligned with their constituencies

This is another corollary of the first rule. In larger projects, you need enough governance bodies to ensure that each is aligned with its own constituency. In particular, if your community is made of disjoint groups with little to no overlap in membership, and those groups each need decisions to be made, they probably need to each have their own governance body at that level.

The risk we are trying to avoid here is dominance of the larger group over smaller groups. If you use a single governance body for two (or more) disjoint groups, chances are that the larger group will dominate the representative governance body, and therefore will end up making decisions for the smaller group. This is generally OK for global decisions that affect every contributor equally, but matters that are solely relevant to the smaller group should be decided at the smaller group level, otherwise that group might be tempted to fork to regain final call authority over their own things.

Rule #3: Only where decisions are needed

Strict application of rule #2 tends to result in the creation of a large number of governance bodies, which is why you need to balance it with rule #3: only create governance bodies where decisions are actually needed. The art of lightweight governance is, of course, to find the right balance between rule #2 and rule #3.

This rule has two practical consequences. The first one is obvious: you should not create vanity governance bodies just to give people or organizations a cool title or badge. Numerous communities fall into the trap of creating "advisory" boards with appointed seats to thank long-standing community members, or to give organizations the illusion of control. Those bodies create extra bureaucracy while not being able to make a single call, or worse, desperately trying to assert authority to justify their existence.

The second consequence is that, before creating a governance body at a certain level in the project organization, you should question whether decisions are really needed at that level. If the group needs no final call, or can trust an upper decision body to make the call if need be, maybe that governance body is not needed. If two governance bodies need to cooperate to ensure things work well between them, do you really need to create a governance body above them, or can you just encourage discussion and collaboration? This trade-off is more subtle, but it generally boils down to how badly you need final decisions to be made, versus letting independently-made decisions coexist.

That is all there is to it! As I said in the introduction, those three rules are not really a magic recipe, but more of a basic framework to help you build a healthy community with just enough governance for your specific situation. Let me know if you find it useful!

The OpenStack map

In the ancient times (circa 2012), as OpenStack started to grow significantly, Ken Pepple created a diagram to represent the various OpenStack components and how information flowed between them. This diagram took on a life of its own, included in one version or another in every presentation to show the complexity of OpenStack in a single spaghetti picture.

As we kept adding new (more or less optional) components to the mix, we stopped trying to represent everything in a single diagram, especially as the Technical Committee refused to special-case some components over others. That left us with a confusing list of 60+ project teams ranging from Nova to Winstackers, and no clear way to represent "OpenStack".

This situation was identified as a key issue by the Board of Directors, the Technical Committee, the User Committee and the Foundation staff during a strategic workshop held last year in Boston. As a result, a group formed to define how to better communicate what OpenStack is, and a subgroup worked more specifically on a new map to represent OpenStack. Here is the result:

The OpenStack map v.20180501

There are a number of things you should notice. First, the map is regularly updated; this is the latest version, from May 2018. The map is also versioned, using a date-based number, so if someone copies it for their presentation and it gets cargo-culted into generations of presentations from there on, it should be pretty apparent that it may not be the latest available version.

Cartographers know that map design is more about what you leave out than about what you represent. This map is very opinionated in that respect. It is designed to be relevant to consumers of OpenStack technology. So it only represents first-order deliverables, things that someone may opt to install or use. That's the reason why it shows Nova, but not Oslo libraries: it does not represent second-order deliverables that first-order deliverables depend on. It also ignores plug-ins or drivers that run on a main deliverable (like Storlets running on Swift, Dragonflow running on Neutron, or magnum-ui running on Horizon).

The remaining components are laid out in named "buckets", based on who the consumer is and what question they answer. There is the main OpenStack bucket, which contains components that provide a user-facing API and that you may deploy to extend the capabilities of your cloud deployment. On the right, the OpenStack-operations bucket contains add-on components that facilitate operating an OpenStack cloud. On the bottom, the OpenStack-lifecyclemanagement bucket shows the various solutions you can use to facilitate installation and lifecycle management of an OpenStack cloud. On the left, the OpenStack-user bucket contains tools that end users of OpenStack clouds can install to help interact with a running OpenStack cloud. And finally, the OpenStack-adjacentenablers bucket contains tools that help other technology stacks (Kubernetes, NFV...) make use of OpenStack services.

Inside each bucket, deliverables are approximately categorized based on what service they deliver. In addition to that, the main OpenStack bucket is organized in a semi-logical manner (base services at the bottom, higher-level services at the top). An opinionated set of "core functionality" is marked in bold to attract the attention of the casual observer to the most-consumed components.

There are lots of different ways to slice this cake, and a lot of things do not perfectly fit in the simplistic view that the map presents. The result is obviously very opinionated, so it cannot please everyone. That's why it's produced by the Foundation staff, with input from the Technical Committee, the User Committee and the Board of Directors. That doesn't mean its design cannot change, or be fixed over time to better represent reality.

Working on this exercise really helped me visualize "OpenStack" as a product. You can see the main product (the OpenStack bucket), separate from operational add-ons, deployment tools, client tools and technology bridges. You can also see things that do not fit well in the map, or that sit at its edges, which we could consider cutting out if they are not successful.

We hope that this map helps people to visually represent OpenStack and can replace the infamous spaghetti diagram in future slidedecks. The next step is to communicate that map more widely, and leverage it more heavily on web properties like the Project Navigator. You can always find the most recent version of the map at www.openstack.org/openstack-map.

OpenStack Spectre/Meltdown FAQ

What are Meltdown and Spectre?

Meltdown and Spectre are the brand names of a series of vulnerabilities discovered by various security researchers around performance optimization techniques built into modern CPUs. Those optimizations (involving superscalar capabilities, out-of-order execution, and speculative branch prediction) fundamentally create a side channel that can be exploited to deduce the content of computer memory that should normally not be accessible.

Why is it big news?

It's big news because rather than affecting a specific operating system, it affects most modern CPUs, in ways that cannot be completely fixed (as you can't physically extract the flawed functionality out of your CPUs). The real solution is in a new generation of CPU optimizations that will not exhibit the same flaws while reaching the same levels of performance. This is unlikely to come soon, which means we'll have to deal with workarounds and mitigation patches for a long time.

Why is it business as usual?

As Bruce Schneier says, "you can't secure what you don't understand". As we build more complex systems (in CPUs, in software, in policies), it is more difficult to build them securely, and they can fail in more subtle ways. There will always be new vulnerabilities and new classes of attacks found, and the answer is always the same: designing defense in depth, keeping track of vulnerabilities found, and swiftly applying patches. This episode might be big news, but the remediation is still applying well-known techniques and processes.

Are those 2 or 3 different vulnerabilities?

It is actually three different exploitation techniques of the same family of vulnerabilities, which need to be protected against separately.

  • CVE-2017-5753 (“bounds check bypass”, or variant 1) is one of the two Spectre variants. It affects specific code sequences within compiled applications, which must be addressed on a per-binary basis (see the sketch after this list for the kind of sequence involved). Applications that can be made to execute untrusted code (e.g. operating system kernels or web browsers) will need updates as more of those exploitable sequences are found.

  • CVE-2017-5715 (“branch target injection”, or variant 2) is the other Spectre variant. It more generally works by poisoning the CPU branch prediction cache to induce privileged applications to leak small bits of information. This can be fixed by a CPU microcode update or by applying advanced software mitigation techniques (like Google's Retpoline) to the vulnerable binaries.

  • CVE-2017-5754 (“rogue data cache load”, or variant 3) is also called Meltdown. This technique lets any unprivileged process read kernel memory (and therefore access information and secrets in other processes running on the same system). It is the easiest to exploit, and requires patching the operating system to reinforce isolation of memory page tables at the kernel level.
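
To make variant 1 a bit more concrete, here is a minimal schematic in C of the kind of exploitable sequence the Spectre paper describes. It is an illustration only: the names (victim_function, array1, array2, array1_size) are hypothetical and not taken from any real codebase.

    /* Schematic Spectre variant 1 ("bounds check bypass") gadget.
     * Illustration only, following the pattern described in the Spectre paper. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];
    unsigned int array1_size = 16;

    void victim_function(size_t x)
    {
        if (x < array1_size) {
            /* The branch may be speculatively predicted as taken even for an
             * out-of-bounds x. The dependent load below then pulls in a cache
             * line whose index depends on out-of-bounds (potentially secret)
             * memory, and an attacker can later recover that value by timing
             * accesses to array2. */
            uint8_t value = array1[x];
            volatile uint8_t tmp = array2[value * 4096];
            (void)tmp;
        }
    }

The point of showing it here is to illustrate why variant 1 has to be fixed application by application: the problem is not a single kernel interface, but innocuous-looking bounds-checked accesses scattered throughout compiled code.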

What is the impact of those vulnerabilities for OpenStack cloud users?

Infrastructure as a service harnesses virtualization and containerization technologies to present a set of physical, bare-metal resources as virtual computing resources. It heavily relies on the host kernel security features to properly isolate untrusted workloads, especially the various virtual machines running on the same physical host. When those isolation features fail (as is the case here), you can get a hypervisor break: an attacker in a hostile VM running on an unpatched host kernel could use those techniques to access data in other VMs running on the same host.

Additionally, if the guest operating system of your VMs is not patched (or you run a vulnerable application) and untrusted code gets to run on that VM (or in that application), that code could leverage those vulnerabilities to access information in the memory of other processes on the same VM.

What should I do as an OpenStack cloud provider?

Cloud providers should apply kernel patches (from their Linux distribution), hypervisor software updates (from the distribution or their vendor), and CPU microcode updates (from their hardware vendor) that work around or mitigate those vulnerabilities, as soon as they are made available, in order to protect their users.

What should I do as an OpenStack cloud user?

Cloud users should watch for and apply operating system patches for their guest VMs as soon as they are made available. This advice actually applies to any computer (virtual or physical) you happen to use (including your phone).

Are patches available already?

Some patches are out, some are still due. Kernel patches mitigating the Meltdown attack are available upstream, but they are significant patches with lots of side-effects, and some OS vendors are still testing them. The coordinated disclosure process failed to keep the vulnerabilities secret until the planned publication date, which explains why some OS vendors or distributions were not ready when the news dropped.

It is also important to note that this is likely to trigger a long series of patches, as the workarounds and mitigation patches are refined to reduce side-effects and new bugs that those complex patches themselves create. The best recommendation is to keep an eye on your OS vendor patches (and CPU vendor microcode updates) for the coming months and apply all patches quickly.
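
As a practical way to keep an eye on this, recent Linux kernels (and stable kernels with the fixes backported) report a per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities. Here is a small sketch in C that prints whatever the kernel reports; it assumes a kernel new enough to expose those files, and on an older, unpatched kernel the directory simply will not exist.

    /* Sketch: print the kernel-reported mitigation status for each known
     * CPU vulnerability. Assumes a kernel that exposes
     * /sys/devices/system/cpu/vulnerabilities (older kernels do not). */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *base = "/sys/devices/system/cpu/vulnerabilities";
        DIR *dir = opendir(base);
        if (!dir) {
            fprintf(stderr, "%s not found: kernel too old or not patched\n", base);
            return 1;
        }
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (entry->d_name[0] == '.')
                continue;
            char path[512];
            char status[256] = "";
            snprintf(path, sizeof(path), "%s/%s", base, entry->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(status, sizeof(status), f))
                printf("%-12s %s", entry->d_name, status);
            fclose(f);
        }
        closedir(dir);
        return 0;
    }

The same information can of course be read directly from the shell; the point is simply that this status is worth re-checking after every kernel and microcode update, since the mitigations keep evolving.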

Is there a performance hit in applying those patches?

The workarounds and mitigation techniques are still being developed, so it is a little early to say, and it will always depend on the exact workload. However, since the basic flaw here lies in performance optimization techniques in CPUs, most workarounds and mitigation patches should add extra checks, steps and synchronization that will undo some of that performance optimization, resulting in a performance hit.

Is there anything that should be patched on the OpenStack side?

While OpenStack itself is not directly affected, it is likely that some of the patches being (and still to be) developed to mitigate those issues will require optimizations in software code to limit the performance penalty. Keep an eye on our stable branches and/or your OpenStack vendor patches to make sure you catch any of those.

Those vulnerabilities also shine some light on the power of side-channel attacks, which shared systems are traditionally more vulnerable to. Security research is likely to focus on this class of issues in the near future, potentially discovering side-channel security attacks in OpenStack that will need to be fixed.

Where can I learn more?

You can find lots of explanations over the Internet. To understand the basic flaw and the CPU technologies involved, I recommend reading Eben Upton's great post. If that's too deep or you need a good analogy to tell your less-technical friends, I find this one by Robert Merkel not too bad.

For technical details on the vulnerabilities themselves, Jann Horn's post on the Google Project Zero blog should be first on your list. You can also read the Spectre and Meltdown papers.

For more information on the various mitigation techniques, I recommend starting with this article from Google's Security blog. For information about Linux kernel patches in particular, I recommend Greg Kroah-Hartman's post.

What to expect from the Queens PTG

In less than two weeks, OpenStack upstream developers and project team members will assemble in Denver, Colorado for a week of team discussions, kickstarting the Queens development cycle.

Attending the PTG is a great way to make upstream developers more efficient and productive: participating in the new development cycle organization, solving early blockers and long-standing issues in-person, and building personal relationships to ease interactions afterwards.

What changed since Atlanta?

The main piece of feedback we received from the Pike PTG in Atlanta was that with the ad-hoc discussions and dynamic scheduling, it was hard to discover what was being discussed in every room. This was especially an issue during the first two days, where lots of vertical team members were around but did not know which room to go to.

In order to address that issue while keeping the scheduling flexibility that makes this event so productive, we created an IRC-driven dynamic notification system. Each room moderator is able to signal what is being discussed right now, and what will be discussed next in the #openstack-ptg IRC channel. That input is then collected into a mobile-friendly webpage for easy access. That page also shows sessions scheduled in the reservable extra rooms via Ethercalc, so it's a one-stop view of what's being currently discussed in every room, and what you could be interested in joining next.

The other piece of feedback that we received in Atlanta was that the horizontal/vertical week slicing was suboptimal. Having all horizontal teams (QA, Infra, Docs) meet on Monday-Tuesday and all vertical teams (Nova, Cinder, Swift) meet on Wednesday-Friday was a bit arbitrary and did not make an optimal use of the time available.

For Denver we still split the week in two, but with a slightly different pattern. On Monday-Tuesday we'll have inter-team discussions, with rooms centered more on topics than on teams, focused on solving problems. On Wednesday-Friday we'll have intra-team discussions, focused on organizing, prioritizing and bootstrapping the work for the rest of the development cycle. Such a week split won't magically eliminate all conflicts, obviously, but we hope it will improve the overall attendee experience.

What rooms/topics will we have on Monday-Tuesday?

Compute stack / VM & BM WG (#compute): In this room, we’ll have discussions to solve inter-project issues within the base compute stack (Keystone, Cinder, Neutron, Nova, Glance, Ironic…).

API SIG (#api): In this room, we’ll discuss API guidelines to further improve the coherence and compatibility of the APIs we present to the cloud user. Members of the SIG will also be hosting guided reviews of potential API changes, see the openstack-dev mailing list for more details.

Infra / QA / RelMgt / Stable / Requirements helproom (#infra): Join this room if you have any questions about or need help with anything related to the development infrastructure, in a large sense. Questions can be about project infrastructure configuration, test jobs (including taking advantage of the new Zuul v3 features), the “Split Tempest plugins” Queens goal, release management, stable branches, or global requirements.

Packaging WG (#packaging): In this room, we’ll discuss convergence and commonality across the various ways to deploy OpenStack: Kolla, TripleO, OpenStack-Ansible, Puppet-OpenStack, OpenStack Chef, Charms...

Technical Committee / Stewardship WG (#tc): In this room, we’ll discuss project governance issues in general, and stewardship challenges in particular.

Skip-level upgrading (#upgrading): In this room, we’ll discuss support for skip-level upgrading across all OpenStack components. We’ll also discuss increasing the number of projects that support rolling upgrades, zero-downtime upgrades and zero-impact upgrades.

GUI helproom / Horizon (#horizon): Join this room if you have questions or need help writing a Horizon dashboard for your project, and want to learn about the latest Horizon features. Horizon team members will also discuss Queens cycle improvements here.

Oslo common libraries (#oslo): Current and potential future Oslo libraries will be discussed in this room. Come to discuss pain points or missing features, or to learn about libraries you should probably be using.

Docs / I18n helproom (#docs-i18n): Documentation has gone through a major transition at the end of Pike, with more doc maintenance work in the hands of each project team. The Docs and I18n teams will meet in this room and be available to mentor and give guidance to Doc owners in every team.

Simplification (#simplification): Complexity is often cited as the #1 issue in OpenStack. It is however possible to reduce overall complexity, by removing unused features, or deleting useless configuration options. If you’re generally interested in making OpenStack simpler, join this room!

Make components reusable for adjacent techs (#reusability): We see more and more OpenStack components being reused in open infrastructure stacks built around adjacent technology. In this room we’ll tackle how to improve this component reusability, as well as look into things in adjacent communities we could take advantage of.

CLI / SDK helproom / OpenStackClient (#cli): In this helproom we’ll look at streamlining the client-side experience we present to users. Expect discussions around OpenStackClient, Shade and other SDKs.

"Policy in code" goal helproom (#policy-in-code): For the Queens cycle we selected “Policy in code” as a cross-project release goal. Some teams will need help and guidance to complete that goal: this room is available to help you explain and make progress on it.

Interoperability / Interop WG / Refstack (#interop): Interoperability between clouds is a key distinguishing feature of OpenStack clouds. The Interop WG will lead discussions around that critical aspect in this room.

User Committee / Product WG (#uc): The User Committee and its associated subteams and workgroups will be present at the PTG too, with the week-long goal of closing the feedback loop from operators back to developers. This work will be prepared in this room on the first two days of the event.

Security (#security): Security is a process which requires continuous attention. Security-minded folks will gather into this room to further advance key security functionality across all OpenStack components.

Which teams are going to meet on Wednesday-Friday?

The following project teams will meet for all three days: Nova, Neutron, Cinder, TripleO, Ironic, Kolla, Swift, Keystone, OpenStack-Ansible, Infrastructure, QA, Octavia, and Glance.

The following project teams plan to only meet for two days, Wednesday-Thursday: Heat, Watcher, OpenStack Charms, Trove, Congress, Barbican, Mistral, Freezer, Sahara, Glare, and Puppet OpenStack.

Join us!

We already have more than 360 people signed up, but we still have room for you! Join us if you can. The ticket price will increase this Friday though, so if you plan to register I'd advise you to do so ASAP to avoid the last-minute price hike.

The event hotel is pretty full at this point (with the last rooms available priced accordingly), but there are lots of other options nearby.

See you there!

Introducing OpenStack SIGs

Back in March in Boston, the OpenStack Board of Directors, Technical Committee, User Committee and Foundation staff members met for a strategic workshop. The goal of the workshop was to come up with a list of key issues needing attention from OpenStack leadership. One of the strategic areas that emerged from that workshop is the need to improve the feedback loop between users and developers of the software. Melvin Hillsman volunteered to lead that area.

Why SIGs?

OpenStack was quite successful in raising an organized, vocal, and engaged user community. However, the developer and user communities still mostly operate as separate groups. Improving the feedback loop starts with putting everyone who cares about the same problem space in the same rooms and work groups. The Forum (removing the artificial line between the Design Summit and the Ops Summit) was a first step in that direction. SIGs are another step in addressing that problem.

Currently in OpenStack we have various forms of workgroups, all attached to a specific OpenStack governance body: User Committee workgroups (like the Scientific WG or the Large Deployment WG), upstream workgroups (like the API WG or the Deployment WG), or Board workgroups. Some of those are very focused on a specific segment of the community, so it makes sense to attach them to a specific governance body. But most are just a group of humans interested in tackling a specific problem space together, and establishing those groups in a specific community corner sends the wrong message and discourages participation from everyone in the community.

As a result (and despite our efforts to communicate that everyone is welcome), most TC-governed workgroups lack operator participants, and most UC-governed workgroups lack developer participants. It's clearly not because the scope of the group is one-sided (developers are interested in scaling issues, operators are interested in deployment issues). It's because developers assume that a user committee workgroup about "large deployments" is meant to gather operator feedback rather than implementing solutions. It's because operators assume that an upstream-born workgroup about "deployment" is only to explore development commonalities between the various deployment strategies. Or they just fly below the other group's usual radar. SIGs are about breaking the artificial barriers and making it clear(er) that workgroups are for everyone, by disconnecting them from the governance domains and the useless upstream/downstream division.

SIGs in practice

SIGs are neither operator-focused nor developer-focused. They are open groups, with documented guidance on how to get involved. They have a scope, a clear description of the problem space they are working to address, or of the use case they want to better support in OpenStack. Their membership includes affected users who can discuss the pain points and the needs, as well as development resources that can pool their efforts to achieve the group's goals. Ideally everyone in the group straddles the artificial line between operators and developers and identifies as a little of both.

In practice, SIGs are not really different from the various forms of workgroups we already have. You can continue to use the same meetings, git repositories, and group outputs that you used to have. To avoid systematic cross-posting between the openstack-dev and the openstack-operators mailing-lists, SIG discussions can use the new openstack-sigs mailing-list. SIG members can also take advantage of our various events (PTG, Ops meetups, Summits) to meet in person.

Next steps

We are only getting started. So far we only have one SIG: the "Meta" SIG, to discuss advancement of the SIG concept. Several existing workgroups have expressed their willingness to become early adopters of the new concept, so we'll have more soon. If your workgroup is interested in being branded as a SIG, let Melvin or me know, and we'll guide you through the process (which at this point only involves being listed on a wiki page). Over time we expect SIGs to become the default: most community-specific workgroups would become cross-community SIGs, and the remaining workgroups would become more like subteams of their associated governance body.

And if you have early comments or ideas on SIGs, please join the Meta discussion on the openstack-sigs mailing-list (using the [meta] subject prefix)!