21 years in, the landscape around open source has evolved a lot. In
part 1 and
part 2 of this 3-part series,
I explained why today, while open source is more necessary than ever, it
appears to no longer be sufficient. In this part, I'll discuss what we, open
source enthusiasts and advocates, can do about that.
This is not a call to change open source
First, let me clarify what we should not do.
As mentioned in part 2, since
open source was coined in 1998, software companies have evolved ways to
retain control while producing open source software, and in that process
stripped users of some of the traditional benefits associated with F/OSS.
But those companies were still abiding by the terms of the open
source licenses, giving users a clear base set of freedoms and rights.
Over the past year, a number of those companies have decided that they
wanted even more control, in particular control of any revenue associated
with the open source software. They proposed new licenses, removing
established freedoms and rights in order to be able to assert that level of
control. The open source definition defines
those minimal freedoms and rights that any open source software should
have, so the Open Source Initiative (OSI),
as steadfast guardians of that definition, rightfully resisted those attempts.
Those companies quickly switched to attacking OSI's legitimacy, pitching "Open
Source" more as a broad category than a clear set of freedoms and rights.
And they created new licenses, with deceptive naming ("community", "commons",
"public"...) in an effort to blur the lines and retain some of
the open source definition aura for their now-proprietary software.
The solution is not in redefining open source, or claiming it's no longer
relevant. Open source is not a business model, or a constantly evolving way
to produce software. It is a base set of user freedoms and rights expressed
in the license the software is published under. Like all standards, its value
resides in its permanence.
Yes, I'm of the opinion that today, "open source" is not enough.
Yes, we need to go beyond open source. But in order to do that, we need to
base that additional layer on a solid foundation: the
open source definition.
That makes the work of the OSI more important
than ever. Open source used to be attacked from the outside, proprietary
software companies claiming open source software was inferior or dangerous.
Those were clear attacks that were relatively easy to resist: it was mostly
education and advocacy, and ultimately the quality of open source software
could be used to prove our point. Now it's attacked from the inside, by
companies traditionally producing open source software, claiming that it
should change to better fit their business models. We need to go back to
the basics and explain why those rights and freedoms matter, and why
blurring the lines ultimately weakens everyone. We need a strong OSI
to lead that new fight, because it is far from over.
A taxonomy of open source production models
As I argued in previous parts, how open source is built ultimately impacts
the benefits users get. A lot of us know that, and we all came up with our
own vocabulary to describe those various ways open source is produced today.
Even within a given model (say open collaboration between equals on a level
playing field), we use different sets of principles: the OpenStack
Foundation has the 4 Opens
(open source, open development, open design, open community), the Eclipse
Foundation has the Open Source Rules of Engagement (open, transparent,
meritocracy), the Apache Software Foundation has the Apache Way... We all
advocate
for our own variant, focusing on differences rather than what we have in
common: the key benefits those variants all enable.
This abundance of slightly-different vocabulary makes it difficult to rally
around and communicate efficiently. If we have no clear way to differentiate
good all-benefits-included open source from twisted some-benefits-withheld
open source, the confusion (where all open source is considered equal)
benefits the twisted production models. I think it is time for us to
regroup, and converge around a clear, common classification of open source
production models.
We need to classify those models based on which benefits they guarantee
to the users of the produced software. Open-core does not guarantee
availability, single-vendor does not provide sustainability, nor does
it let users efficiently engage with and influence the direction of the
software, while open-collaboration gives you all three.
Once we have this classification, we'll need to heavily communicate around
it, with a single voice. As long as we use slightly different terms (or
mean slightly different things when using common terms), we maintain
confusion which ultimately benefits the most restrictive models.
Get together
Beyond that, I think we need to talk more. Open source conferences used to
be all about education and advocacy: what is this weird way of producing
software, and why you should probably be interested in it. Once open source
became ubiquitous, that style of horizontal open source conference became
less relevant, and was soon replaced by more vertical conferences around a
specific stack or a specific use case.
This is a good evolution: this is what winning looks like. The issue is:
the future of open source is not discussed anymore. We rest on our
laurels, while the world continually evolves and adapts. Some open source
conference islands may still exist, with high-level keynotes still raising
the issues, but those are generally one-way conversations.
To do this important work of converging vocabulary and defining common
standards on how open source is produced, Twitter won't cut it. To
bootstrap the effort we'll need to meet, get around a table and take the
time to discuss specific issues together. Ideally that would be done around
some other event(s) to avoid extra travel.
And we need to do that soon. This work is becoming urgent. "Open source" as
a standard has lots of value because of all the user benefits traditionally
associated with free and open source software. That created an aura that
all open source software still benefits from today. But that aura is
weakening over time, thanks to twisted production models. How much more
single-vendor open source can we afford until "open source" no longer means
you can engage with the community and influence the direction of the
software?
So here is my call to action, which concludes this series.
In 2019, open source is more important than ever. Open source has not "won":
this is a continuous effort, and we are today at a critical juncture.
I think open source advocates and enthusiasts need to get together, define
clear, standard terminology for how open source software is built, and start
communicating around it with a single voice. And beyond that, we need
to create forums where those questions on the future of open source are
discussed. Because whatever battles you win today, the world does not stop
evolving and adapting.
Obviously I don't have all the answers. And there are lots of interesting
questions. It's just time we have a place to ask those questions and
discuss the answers. If you are interested and want to get involved, feel
free to contact me.
21 years in, the landscape around open source has evolved a lot. In
part 1 of this 3-part series,
I explained the benefits of open source and why I think open source is
necessary today. In this part, I'll argue that open source is not enough.
The relative victory of open source
All the benefits detailed in
part 1 really explain why open
source became so popular in the last 15 years. Open source is everywhere today.
It has become the default way to build and publish software. You can find open
source on every server, you can find open source on every phone... Even
Microsoft, the company which basically invented proprietary software, is
heavily adopting open source today, with great success. By all accounts,
open source won.
But... has it, really?
The server, and by extension the computing, networking, and storage
infrastructure, are unquestionably dominated by open source. But the
growing share of code running operations for this infrastructure software
is almost always kept private. The glue code used to provide users access
to this infrastructure (what is commonly described as "cloud computing")
is more often than not a trade secret. And if you look to the other side,
the desktop (or the user-side applications in general) are still
overwhelmingly driven by proprietary software.
Even contemplating what are generally considered open source success
stories, winning can leave a bitter taste in the mouth. For example, looking
at two key tech successes of the last 10 years, Amazon Web Services and
Android, both rely heavily on open source software. They are
arguably part of the open source success picture I just
painted. But if you go back to
part 1 and look at all
the user benefits I listed, the users of AWS and Android don’t really
enjoy them all. As an AWS user, you don't have transparency: you can’t
really look under the hood and understand how AWS runs things, or why the
service behaves the way it does. As an Android user, you can’t really
engage with Android upstream, contribute to the creation of the software
and make sure it serves your needs better tomorrow.
So open source won and is ubiquitous... however, in most cases, users are
denied some of the key benefits of open source. And looking at what is
called "open source" today, one can find lots of twisted production models.
By "twisted", I mean models where some open source benefits go missing,
like the ability to efficiently engage in the community.
For example, you find single-vendor open source, where the
software is controlled by a single company doing development behind
closed doors. You find open-core open source, where advanced features
are reserved for a proprietary version and the open source software is
used as a trial edition. You find open source code drops, where an
organization just periodically dumps their code to open-wash it with
an open source label. You find fire-and-forget open source, where people
just publish once on GitHub with no intention of ever maintaining the code.
How did we get here?
Control or community
What made open source so attractive to the software industry was the
promise of the community. An engaged community that would help them write
the software, build a more direct relationship that would transcend classic
vendor links, and help them promote the software. The issue was, those
companies still very much wanted to keep control: of the software, of the
design, of the product roadmap, and of the revenue. And so, in reaction to
the success of open source, the software industry evolved a way to produce
open source software that would allow them to retain control.
But the fact is... you can’t really have both control and community.
Exclusive control over the code by a single party discourages other
contributors from participating. The external community is treated as
free labor, and is not on a level playing field with contributors
on the inside, who really decide the direction of the software. This is
bound to create frustration. This does not make a sustainable community,
and ultimately does not result in sustainable software.
The open-core model followed by some of those companies creates an
additional layer of community tension. At first glance, keeping a set
of advanced features for a proprietary edition of the software sounds
like a smart business model. But what happens when a contributor proposes
code that would make the "community edition" better? Or when someone starts
to question why a single party is capitalizing on the work of "the community"?
In the best case, this leads to the death of the community, and in the worst
case this leads to a fork... which makes this model particularly brittle.
By 2019, I think it became clearer to everyone that they have to choose
between keeping control and growing a healthy community. However, most
companies chose to retain control, and abandon the idea of true community
contribution. Their goal is to keep reaping the marketing gains of calling
their software open source, of pretending to have all the benefits associated
with the open source label, while applying a control recipe that is much
closer to proprietary software than to the original freedoms and rights
associated with free software and open source.
How open source is built impacts the benefits users get
So the issue with twisted production models like single-vendor or open-core is
that you are missing some benefits, like availability, or sustainability,
or self-service, or the ability to engage and influence the direction of
the software. The software industry adapted to the success of open source:
it adopted open source licenses but little else, stripping users of the
benefits associated with open source while following the letter of the
open source law.
How is that possible?
The issue is that free software and open source both addressed solely the
angle of the freedoms and rights that users get with the end product, as conveyed
through software licenses. They did not mandate how the software was to
be built. They said nothing about who really controls the creation of
the software. And how open source is built actually has a major impact on
the benefits users get out of the software.
The sad reality is, in this century, most open source projects are actually
closed in one way or another: their core development may be done behind
closed doors, or their governance may be locked down to ensure permanent
control by the main sponsor. Everyone produces open source software, but
projects developed by a truly open community have become rare.
And yet, with truly open communities, we have an open source production
model that guarantees all the benefits of free and open source software.
It has a number of different names. I call it open collaboration:
the model where a community of equals contributes to a commons on a level
playing field, generally under an open governance and sometimes the asset
lock of a neutral non-profit organization. No reserved seats, no elite
group of developers doing design behind closed doors. Contribution is the
only valid currency.
Open collaboration used to be the norm for free and open source software
production. While it is rarer today, the success of recent open
infrastructure communities like OpenStack or
Kubernetes proves that this model is still viable
today at very large scale, and can be business-friendly. This model
guarantees all the open source benefits I listed in
part 1, especially
sustainability (not relying on a single vendor), and the ability
for anyone to engage, influence the direction of the software, and
make sure it addresses their future needs.
Open source is not enough
As much as I may regret it, the software industry is free to release their
closely-developed software under an open source license. They have every
right to call their software "open source", as long as they comply with
the terms of an OSI-approved license.
So if we want to promote good all-benefits-included open source against
twisted some-benefits-withheld open source, F/OSS advocates will need
to regroup, work together, reaffirm the open source definition and
build additional standards on top of it, beyond "open source".
This will be the theme of the last part in this series, to be published
next week. Thank you for reading so far!
21 years in, the landscape around open source has evolved a lot. Today, "open
source" is not enough. In my opinion it is necessary, but it is not
sufficient. In this 3-part series I'll detail why, starting with Part 1
-- why open source is necessary today.
What open source is
Free software started in the 1980s by defining a number of freedoms. The
author of free software has to grant users (and future contributors to
the software) those freedoms. To summarize, those freedoms made you free
to study and improve the software, and to distribute your improvements to the
public, so that ultimately everyone benefits. That was done in reaction
to the appearance of "proprietary" software in a world that previously
considered software a public good.
When open source was defined in 1998, it focused on a more specific angle:
the rights users of the software get with the software, like access to the
source code, or lack of constraints on usage. This direct focus on user
rights (and less confusing naming) made it much more understandable to
businesses and was key to the success of open source in our industry today.
Despite being more business-friendly, open source was never a "business
model". Open source, like free software before it, is just a set of
freedoms and rights attached to software. Those are conveyed through software
licenses and using copyright law as their enforcement mechanism. Publishing
software under an F/OSS license may be a component of a business model, but
if it is the only one, then you have a problem.
Freedoms
The freedoms and rights attached to free and open source software bring
a number of key benefits for users.
The first, and most often cited, of those benefits is cost. Access
to the source code is basically free as in beer. Thanks to the English
language, this created interesting confusion in the mass-market as to
what the "free" in "free software" actually meant. You can totally sell
"free software" -- this is generally done by adding freedoms or bundling
services beyond what F/OSS itself mandates (and not by removing freedoms,
as some recently would like you to think).
If the cost benefit has grown more significant as open source evolved,
it's not because users are less and less willing to pay for software or
computing. It's due to the increasingly ubiquitous nature of computing. As
software eats the world,
the traditional pay-per-seat software models are less and less suited
to how users work, and they create extra friction in a world
where everyone competes on speed.
As an engineer, I think that today, cost is a scapegoat benefit. What
matters more to users is actually availability. With open source
software, there is no barrier to trying out the software with all of its
functionality. You don't have to ask anyone for permission (or enter any
contractual relationship) to evaluate the software for future use, to
experiment with it, or just to have fun with it. And once you are ready
to jump in, there is no friction in transitioning from experimentation
to production.
As an executive, I consider sustainability to be an even more
significant benefit. When an organization makes the choice of deploying
software, it does not want to be left without maintenance just because
the vendor decides to drop support for the software it runs, or just
because the vendor goes bankrupt. The source code being available for
anyone to modify means you are not relying on a single vendor for
long-term maintenance.
Having a multi-vendor space is also a great way to avoid lock-in. When
your business grows a dependency on software, the cost of switching to
another solution can get very high. You find yourself on the vulnerable
side of maintenance deals. Being able to rely on a market of vendors
providing maintenance and services is a much more sustainable way of
consuming software.
Another key benefit of open source adoption in a corporate setting is
that open source makes it easier to identify and attract talent.
Enterprises can easily identify potential recruits based on the open
record of their contributions to the technology they are interested in.
Conversely, candidates can easily identify with the open source
technologies an organization is using. They can join a company with
certainty that they will be able to capitalize on the software experience
they will grow there.
A critical benefit on the technical side is transparency. Access to
the source code means that users are able to look under the hood and
understand by themselves how the software works, or why it behaves
the way it does. Transparency also allows you to efficiently audit
the software for security vulnerabilities. Beyond that, the ability to
take and modify the source code means you have the possibility of
self-service: finding and fixing issues by yourself, without even
depending on a vendor. In both cases that increases your speed in
reacting to unexpected behavior or failures.
Last but not least: with open source you have the possibility to engage
in the community developing the software, and to influence its direction
by contributing directly to it. This is not about "giving back" (although
that is nice). Organizations that engage in open source communities
are more efficient: they anticipate changes and can voice concerns about
decisions that would adversely affect them. They can make sure the
software adapts to future needs by growing the features they will
need tomorrow.
Larger benefits for ecosystems
Beyond those user benefits (directly derived from the freedoms and rights
attached to F/OSS), open source software also has positive effects to
wider ecosystems.
Monopolies are bad for users. Monocultures are vulnerable environments.
Open source software allows challengers to group efforts and collaborate
to build an alternative to the monopoly player. It does not need to beat
or eliminate the proprietary solution -- being successful is enough to
create a balance and result in a healthier ecosystem.
Looking at the big picture, we live on a planet with limited natural resources,
where reducing waste and optimizing productivity is becoming truly
critical. As software gets pervasive and more and more people produce it,
the open source production model reduces duplication of effort and the waste
of energy of having the same solutions developed in multiple parallel
proprietary silos.
Finally, I personally think a big part of today's social issues is the
result of artificially separating our society between producers and
consumers. Too many people are losing the skills necessary to build
things, and are just given subscriptions, black boxes and content to
absorb in a well-oiled consumption machine. Free and open source software
blurs the line between producers and consumers by removing barriers and
making every consumer a potential producer. It is part of the solution
rather than being part of the problem.
All those benefits explain why open source software is so successful today.
Those unique benefits ultimately make a superior product, one that is a smart
choice for users. It is also a balancing force that drives good hygiene to
wider ecosystems, which is why I would go so far as to say it is necessary
in today's world. In part 2
we'll see why today, while being necessary, open source is no longer
sufficient.
Next week, OpenStack contributors will come together in Denver, Colorado at
the Project Teams Gathering
to discuss in-person the work coming up for the
Stein release cycle.
This regular face-to-face meeting time is critical:
it allows us to address issues that are not easily fixed in virtual
communications, like brainstorming solutions, agreeing on implementation
details, or building up personal relationships. Since day 0 in OpenStack
we have had such events, but their shape and form evolved with our community.
A brief history of contributor events
It started with the Austin Design Summit in July 2010, where the basics of
the project were discussed. The second Design Summit in San Antonio at the
end of 2010 introduced a parallel business track, which grew in importance
as more organizations and potential users joined the fray. The contributor
gathering slowly became a sub-event happening at the same time as the main
"Summit". By 2015, summits were 5-day events attracting 6,000 people. That
made for a very busy week, and it was very difficult for contributors to focus
on the necessary discussions amid the distractions and commitments of the main
event going on at the same time.
The time was ripe for a change, and that is when we introduced the idea of a
Project Teams Gathering (PTG). The PTG was a separate 5-day event for
contributors to discuss in-person in a calmer, more productive setting.
By the Austin Summit in 2016, it was pretty clear that was the only option
to get productive gatherings again, and the decision was made to roll out
our first PTG in February, 2017 in Atlanta. Attendees loved the small event
feel and their restored productivity. Some said they got more done during
that week than in all the old Design Summits combined, despite some challenges
in navigating the event. We iterated on that formula in Denver and Dublin,
creating tools to make the unstructured and dynamic event agenda more
navigable, by making what is currently happening more discoverable. The
format was extended to include other forms of contributor teams, like SIGs,
workgroups, or Ops meetups. Feedback on the event by the attendees was
extremely good.
The limits of the PTG model
While the feedback at the event was excellent, over the last year it became
pretty clear that holding a separate PTG created a lot of tension. The most
obvious tension was between PTG and Summit. The PTG was designed as an
additional event, not a replacement. In particular, developers were still very
much wanted at the main Summit event, to maintain the technical level of the
event, to reach out to new contributors and users, to discuss with operators
the future of the project at the Forum. But it is hard to justify traveling
internationally 4 times per year to follow a mature project, so a lot of
people ended up choosing one or the other. Smaller teams usually skipped
the PTG, while many in larger teams would skip the Summit. That created
community fragmentation between the ones who could attend 4 events per year
and the ones who could not. And those who could not were on the rise: with
the growth in OpenStack adoption in China, the number of contributors, team
leaders and teams where most members are based in China increased
significantly.
Beyond that, the base of contributors to OpenStack is changing: less and less
vendor-driven and more and more user-driven. That is generally a good thing,
but it means that we are slowly moving away from contributors who are 100%
employed to work upstream (and therefore travel as many times a year as
necessary to maximize that productivity) toward contributors who spend a
couple of hours per week helping upstream (for whom travel comes at a premium).
There are a lot of things OpenStack needs to change to be more friendly to
this type of contributor, and the PTG format was not really helping in this
transition.
Finally, over the last year it became clear that the days of the 5-day-long,
5,000-person events were gone. Once the initial curiosity and hype-driven
attendance has passed, and people actually start to understand what OpenStack
can be used for (or not used for), you end up with a less overwhelming event,
with a more reasonable number of attendees and days. Most of the 2015-2016
reasons for a separate event no longer apply.
Trying a different trade-off
We ran a number of surveys to evaluate our options -- across Foundation
sponsors, across PTG attendees, across contributors at large. About 60%
of contributors supported co-locating the PTG with the Summit. Even
considering only past PTG attendees, 53% still supported co-location. 85% of the
22 top contributing organizations also supported co-location, although some
of the largest ones would prefer to keep it separate. Overall, it felt like
enough of the environment had changed that even for those who had benefited
from the event in the past, the solution we had was no longer necessarily
the optimal choice.
In Dublin, then in Vancouver, options were discussed with the Board, the
Technical Committee and the User Committee, and the
decision was made
to co-locate the Project Teams Gathering with the Summits in 2019.
The current plan is to run the first Summit in 2019 from Monday to Wednesday,
then a 3-day PTG from Thursday to Saturday. The Forum would still happen during
the Summit days, so the more strategic discussions that happened at the PTG
could move there.
Obviously, some of the gains of holding the PTG as a separate event will be
lost. In particular, a separate event allowed us to hold strategic discussions
(the Forum at the Summit) at a different time in the development cycle from the
more tactical discussions (the PTG). Some of the frustration of discussing
both in the same week, when it's a bit late to influence the cycle focus,
will return. The Summit will happen close to releases again, without
giving vendors much time to build products on the release, or deployers much
time to try it, reducing the quality of the feedback we get at the Forum.
That said, with the co-location we will strive to keep as much as we can of
what made the PTG unique and productive. In order to preserve the distinct
productive feel of the event, the PTG will be organized as a completely
separate event, with its own registration and branding. It will keep its
unstructured content and dynamic schedule tools. In order to prevent the
activities of the Summit from distracting PTG attendees with outside
commitments, the co-located PTG will happen on entirely separate days,
once the Summit is over. There are only so many days in a week though,
so the trade-off here is to end the PTG on the Saturday.
This change is likely to anger or please you depending on where you stand.
It is important to realize that there is no perfect solution here. Any
solution we choose will be a trade-off between a large number of variables:
including more contributors, maximizing attendee productivity, getting a
critical mass of people at our events, containing travel costs...
We just hope that this new trade-off will strike a better balance for
OpenStack in 2019, and the Foundation will continue to adapt its event strategy
to changing conditions in the future.
Any group of humans needs some form of governance. It’s a set of rules
the group follows in order to address issues and make clear decisions.
Even the absence of rules (anarchy) is a form of governance! At the
opposite end of the spectrum is dictatorship, where all decisions are
made by one person. Open source projects are groups of humans, and they
are no exception to this. They can opt for various governance models,
which I detailed in a
previous article
four years ago (how time flies!).
That article compared
various overall models in terms of which one would best ensure the long-term
survival of the community, avoiding revolutions (or forks). It advocated for
a representative democracy model, and since then I've been asked several
times for the best recipe to implement it. However, there are numerous
trade-offs in the exercise of building governance, and the "best" depends
a lot on the specifics of each project situation. So, rather than detail a
perfect one-size-fits-all governance recipe, in this article I'll propose a
framework of three basic rules to keep in mind when implementing it.
This simple 3-rule model can be used to create just enough governance,
a lightweight model that should be sustainable over the long run, while
avoiding extra layers of useless bureaucracy.
Rule #1: Contributor-driven bodies
Governance bodies for an open source project should be selected by the
contributors to the project. I'm not talking of governance bodies for
open source Foundations (which generally benefit from having some
representation of their corporate sponsors chiming in on how their
money shall be spent). I'm talking about the upstream open source project
itself, and how the technical choices end up being made in a community of
contributors.
This rule is critical: it ensures that the people contributing code,
documentation, usage experience, mentoring time or any other form of
contribution to the project are aligned with the leadership of the
project. When this rule is not met, generally the leadership and the
contributors gradually drift apart, to the point where the contributors
no longer feel like their leadership represents them. This situation
generally ends with contributors making the disruptive decision to fork
the project under a new, contributor-aligned governance, generally leaving
the old governance body with a trademark and an empty shell to govern.
One corollary of that first rule is that the governance system must
regularly allow replacement of current leaders. Nobody should be appointed
for life, and the contributors should regularly be consulted, especially
in fast-moving communities.
Rule #2: Aligned with their constituencies
This is another corollary of the first rule. In larger projects, you need
enough governance bodies to ensure that each is aligned with its own
constituency. In particular, if your community is made of disjoint groups
with little to no overlap in membership, and those groups each need decisions
to be made, they probably need to each have their own governance body at that
level.
The risk we are trying to avoid here is dominance of the larger group over
smaller groups. If you use a single governance body for two (or more)
disjoint groups, chances are that the larger group will dominate the
representative governance body, and therefore will end up making decisions
for the smaller group. This is generally OK for global decisions that affect
every contributor equally, but matters that are solely relevant to the
smaller group should be decided at the smaller group level, otherwise that
group might be tempted to fork to regain final call authority over their
own things.
Rule #3: Only where decisions are needed
Strict application of rule #2 tends to result in the creation of a large
number of governance bodies, which is why you need to balance it with rule #3:
only create governance bodies where decisions are actually needed. The art
of lightweight governance is, of course, to find the best balance between
rule #2 and rule #3.
This rule has two practical consequences. The first one is obvious: you should
not create vanity governance bodies, just to give people or organizations a
cool title or badge. Numerous communities fall into the trap of creating
"advisory" boards with appointed seats, to thank long-standing community
members, or give organizations the illusion of control. Those bodies create
extra bureaucracy while not being able to make a single call, or worse,
trying desperately to assert authority to justify their existence.
The second consequence is, before creating a governance body at a certain
level in the project organization, you should question whether decisions
are really needed at that level. If the group needs no final call, or can
trust an upper decision body to make the call if need be, maybe that
governance body is not needed. If two governance bodies need to cooperate
to ensure things work well between them, do you really need to create a
governance body above them, or can you just encourage discussion and
collaboration?
This trade-off is more subtle, but generally boils down to how badly you
need final decisions to be made, vs. letting independently-made decisions
live alongside one another.
That is all there is to it! As I said in the introduction, those three
rules are not really a magic recipe, but more of a basic framework to
help you, in the specific situation of your community, build healthy
communities with just enough governance. Let me know if you find it useful!