ttx:reloadedhttps://ttx.re/2019-07-15T15:52:00+02:00Open source in 2019, Part 3/32019-07-15T15:52:00+02:002019-07-15T15:52:00+02:00Thierry Carreztag:ttx.re,2019-07-15:/open-source-2019-part3.html<p>21 years in, the landscape around open source evolved a lot. In
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> and
<a href="https://ttx.re/open-source-2019-part2.html">part 2</a> of this 3-part series,
I explained why today, while open source is more necessary than ever, it
appears to no longer be sufficient. In this part, I'll discuss what we, open
source enthusiasts …</p><p>21 years in, the landscape around open source evolved a lot. In
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> and
<a href="https://ttx.re/open-source-2019-part2.html">part 2</a> of this 3-part series,
I explained why today, while open source is more necessary than ever, it
appears to no longer be sufficient. In this part, I'll discuss what we, open
source enthusiasts and advocates, can do about that.</p>
<h2>This is not a call to change open source</h2>
<p>First, let me clarify what we should <strong>not</strong> do.</p>
<p>As mentioned in <a href="https://ttx.re/open-source-2019-part2.html">part 2</a>, since
open source was coined in 1998, software companies have evolved ways to
retain control while producing open source software, and in that process
stripped users of some of the traditional benefits associated with F/OSS.
But those companies were still abiding by the terms of the open
source licenses, giving users a clear base set of freedoms and rights.</p>
<p>Over the past year, a number of those companies have decided that they
wanted even <em>more</em> control, in particular control of any revenue associated
with the open source software. They proposed new licenses, removing
established freedoms and rights in order to be able to assert that level of
control. The <a href="https://opensource.org/osd">open source definition</a> defines
those minimal freedoms and rights that any open source software should
have, so the <a href="https://opensource.org/">Open Source Initiative (OSI)</a>,
as steadfast guardians of that definition, rightfully resisted those attempts.</p>
<p>Those companies quickly switched to attacking OSI's legitimacy, pitching "Open
Source" more as a broad category than a clear set of freedoms and rights.
And they created new licenses, with deceptive naming ("community", "commons",
"public"...) in an effort to blur the lines and retain some of
the open source definition aura for their now-proprietary software.</p>
<p>The solution is not in redefining open source, or claiming it's no longer
relevant. Open source is not a business model, or a constantly evolving way
to produce software. It is a base set of user freedoms and rights expressed
in the license the software is published under. Like all standards, its value
resides in its permanence.</p>
<p>Yes, I'm of the opinion that today, "open source" is not enough.
Yes, we need to go beyond open source. But in order to do that, we need to
base that additional layer on a solid foundation: the
<a href="https://opensource.org/osd">open source definition</a>.</p>
<p>That makes the work of the <a href="https://opensource.org/">OSI</a> more important
than ever. Open source used to be attacked from the outside, proprietary
software companies claiming open source software was inferior or dangerous.
Those were clear attacks that were relatively easy to resist: it was mostly
education and advocacy, and ultimately the quality of open source software
could be used to prove our point. Now it's attacked from the inside, by
companies traditionally producing open source software, claiming that it
should change to better fit their business models. We need to go back to
the basics and explain why those rights and freedoms matter, and why
blurring the lines ultimately weakens everyone. We need a strong OSI
to lead that new fight, because it is far from over.</p>
<h2>A taxonomy of open source production models</h2>
<p>As I argued in previous parts, how open source is built ultimately impacts
the benefits users get. A lot of us know that, and we all came up with our
own vocabulary to describe those various ways open source is produced today.</p>
<p>Even within a given model (say open collaboration between equals on a level
playing field), we use different sets of principles: the OpenStack
Foundation has the <a href="https://www.openstack.org/four-opens/">4 Opens</a>
(open source, open development, open design, open community), the Eclipse
Foundation has the Open Source Rules of Engagement (open, transparent,
meritocracy), the Apache Software Foundation has the Apache Way... We all advocate
for our own variant, focusing on differences rather than what we have in
common: the key benefits those variants all enable.</p>
<p>This abundance of slightly-different vocabulary makes it difficult to rally
around and communicate efficiently. If we have no clear way to differentiate
good all-benefits-included open source from twisted some-benefits-withheld
open source, the confusion (where all open source is considered equal)
benefits the twisted production models. I think it is time for us to
regroup, and converge around <strong>a clear, common classification of open source
production models</strong>.</p>
<p>We need to classify those models based on which benefits they guarantee
to the users of the produced software. Open-core does not guarantee
availability, single-vendor does not provide sustainability nor does
it allow users to efficiently engage and influence the direction of the
software, while open-collaboration gives you all three.</p>
<p>Once we have this classification, we'll need to heavily communicate around
it, with a single voice. As long as we use slightly different terms (or
mean slightly different things when using common terms), we maintain
confusion which ultimately benefits the most restrictive models.</p>
<h2>Get together</h2>
<p>Beyond that, I think we need to talk more. Open source conferences used to
be all about education and advocacy: what is this weird way of producing
software, and why you should probably be interested in it. Once open source
became ubiquitous, those horizontal open source conferences became
less relevant, and were soon replaced by more vertical conferences around a
specific stack or a specific use case.</p>
<p>This is a good evolution: this is what winning looks like. The issue is:
<strong>the future of open source is not discussed anymore</strong>. We rest on our
laurels, while the world continually evolves and adapts. Some open source
conference islands may still exist, with high-level keynotes still raising
the issues, but those are generally one-way conversations.</p>
<p>To do this important work of converging vocabulary and defining common
standards on how open source is produced, Twitter won't cut it. To
bootstrap the effort we'll need to meet, get around a table and take the
time to discuss specific issues together. Ideally that would be done around
some other event(s) to avoid extra travel.</p>
<p>And we need to do that soon. This work is becoming urgent. "Open source" as
a standard has lots of value because of all the user benefits traditionally
associated with free and open source software. That created an aura that
all open source software still benefits from today. But that aura is
weakening over time, thanks to twisted production models. How much more
single-vendor open source can we afford until "open source" no longer means
you can engage with the community and influence the direction of the
software?</p>
<p>So here is my call to action, which concludes this series.</p>
<p>In 2019, open source is more important than ever. Open source has not "won",
this is a continuous effort, and we are today at a critical junction.
I think open source advocates and enthusiasts need to get together, define
clear, standard terminology on how open source software is built, and start
communicating heavily around it with a single voice. And beyond that, we need
to create forums where those questions on the future of open source are
discussed. Because whatever battles you win today, the world does not stop
evolving and adapting.</p>
<p>Obviously I don't have all the answers. And there are lots of interesting
questions. It's just time we have a place to ask those questions and
discuss the answers. If you are interested and want to get involved, feel
free to contact me.</p>Open source in 2019, Part 2/32019-07-08T15:50:00+02:002019-07-08T15:50:00+02:00Thierry Carreztag:ttx.re,2019-07-08:/open-source-2019-part2.html<p>21 years in, the landscape around open source evolved a lot. In
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> of this 3-part series,
I explained all the benefits of open source and why I think it is
necessary today. In this part, I'll argue that <strong>open source is not enough</strong>.</p>
<h2>The relative victory of open …</h2><p>21 years in, the landscape around open source evolved a lot. In
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> of this 3-part series,
I explained all the benefits of open source and why I think it is
necessary today. In this part, I'll argue that <strong>open source is not enough</strong>.</p>
<h2>The relative victory of open source</h2>
<p>All the benefits detailed in
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> really explain why open
source became so popular in the last 15 years. Open source is everywhere today.
It has become the default way to build and publish software. You can find open
source on every server, you can find open source on every phone... Even
Microsoft, the company which basically invented proprietary software, is
heavily adopting open source today, with great success. By all accounts,
open source won.</p>
<p>But... has it, really?</p>
<p>The server, and by extension the computing, networking, and storage
infrastructure, are unquestionably dominated by open source. But the
growing share of code running operations for this infrastructure software
is almost always kept private. The glue code used to provide users access
to this infrastructure (what is commonly described as "cloud computing")
is more often than not a trade secret. And if you look to the other side,
the desktop (or user-side applications in general) is still
overwhelmingly driven by proprietary software.</p>
<p>Even contemplating what are generally considered open source success
stories, winning can leave a bitter taste in the mouth. For example, looking
at two key tech successes of the last 10 years, Amazon Web Services and
Android, they both rely heavily on open source software. They
are arguably a part of this <em>success of open source</em> picture I just
painted. But if you go back to
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a> and look at all
the user benefits I listed, the users of AWS and Android don’t really
enjoy them all. As an AWS user, you don't have <em>transparency</em>: you can’t
really look under the hood and understand how AWS runs things, or why the
service behaves the way it does. As an Android user, you can’t really
engage with Android upstream, contribute to the creation of the software
and make sure it serves your needs better tomorrow.</p>
<p>So open source won and is ubiquitous... however in most cases, users are
denied some of the key benefits of open source. And looking at what is
called "open source" today, one can find lots of twisted production models.
By "twisted", I mean models where some open source benefits go missing,
like the ability to efficiently engage in the community.</p>
<p>For example, you find <em>single-vendor</em> open source, where the
software is controlled by a single company doing development behind
closed doors. You find <em>open-core</em> open source, where advanced features
are reserved for a proprietary version and the open source software is
used as a trial edition. You find open source <em>code drops</em>, where an
organization just periodically dumps their code to open-wash it with
an open source label. You find <em>fire and forget</em> open source, where people
just publish once on GitHub with no intention of ever maintaining the code.
How did we get here?</p>
<h2>Control or community</h2>
<p>What made open source so attractive to the software industry was the
promise of the community. An engaged community that would help them write
the software, build a more direct relationship that would transcend classic
vendor links, and help them promote the software. The issue was, those
companies still very much wanted to keep control: of the software, of the
design, of the product roadmap, and of the revenue. And so, in reaction to
the success of open source, the software industry evolved a way to produce
open source software that would allow them to retain control.</p>
<p>But the fact is... you can’t really have control <strong>and</strong> community. The
exclusive control by a specific party over the code discourages other
contributors from participating. The external community is treated as
free labor, and is not on a level playing field compared to contributors
on the inside, who really decide the direction of the software. This is
bound to create frustration. This does not make a sustainable community,
and ultimately does not result in sustainable software.</p>
<p>The <em>open-core</em> model followed by some of those companies creates an
additional layer of community tension. At first glance, keeping a set
of advanced features for a proprietary edition of the software sounds
like a smart business model. But what happens when a contributor proposes
code that would make the "community edition" better? Or when someone starts
to question why a single party is capitalizing on the work of "the community"?
In the best case, this leads to the death of the community, and in the worst
case this leads to a fork... which makes this model particularly brittle.</p>
<p>By 2019, I think it became clearer to everyone that they have to choose
between keeping control and growing a healthy community. However, most
companies chose to retain control, and abandon the idea of true community
contribution. Their goal is to keep reaping the marketing gains of calling
their software open source, of pretending to have all the benefits associated
with the open source label, while applying a control recipe that is much
closer to proprietary software than to the original freedoms and rights
associated with free software and open source.</p>
<h2>How open source is built impacts the benefits users get</h2>
<p>So the issue with twisted production models like single-vendor or open-core is
that you are missing some benefits, like <em>availability</em>, or <em>sustainability</em>,
or <em>self-service</em>, or the ability to engage and influence the direction of
the software. The software industry adapted to the success of open source:
it adopted open source licenses but little else, stripping users of the
benefits associated with open source while following the letter of the
open source law.</p>
<p>How is that possible?</p>
<p>The issue is that free software and open source both addressed solely the
angle of freedom and rights that users get with the end product, as conveyed
through software licenses. They did not mandate <strong>how</strong> the software was to
be built. They said nothing about <strong>who</strong> really controls the creation of
the software. And how open source is built actually has a major impact on
the benefits users get out of the software.</p>
<p>The sad reality is, in this century, most open source projects are actually
closed one way or the other: their core development may be done behind
closed doors, or their governance may be locked down to ensure permanent
control by the main sponsor. Everyone produces open source software, but
projects developed by a truly open community have become rare.</p>
<p>And yet, with truly open communities, we have an open source production
model that guarantees all the benefits of free and open source software.
It has a number of different names. I call it <strong>open collaboration</strong>:
the model where a community of equals contributes to a commons on a level
playing field, generally under an open governance and sometimes the asset
lock of a neutral non-profit organization. No reserved seats, no elite
group of developers doing design behind closed doors. Contribution is the
only valid currency.</p>
<p>Open collaboration used to be the norm for free and open source software
production. While it is rarer today, the success of recent open
infrastructure communities like <a href="https://www.openstack.org">OpenStack</a> or
<a href="https://kubernetes.io">Kubernetes</a> proves that this model is still viable
today at very large scale, and can be business-friendly. This model
guarantees all the open source benefits I listed in
<a href="https://ttx.re/open-source-2019-part1.html">part 1</a>, especially
sustainability (not relying on a single vendor), and the ability
for anyone to engage, influence the direction of the software, and
make sure it addresses their future needs.</p>
<h2>Open source is not enough</h2>
<p>As much as I may regret it, the software industry is free to release their
closely-developed software under an open source license. They have every
right to call their software "open source", as long as they comply with
the terms of an <a href="https://opensource.org/licenses">OSI-approved license</a>.
So if we want to promote good all-benefits-included open source against
twisted some-benefits-withheld open source, F/OSS advocates will need
to regroup, work together, reaffirm the open source definition and
build additional standards on top of it, beyond "open source".</p>
<p>This will be the theme of the last part in this series, to be published
next week. Thank you for reading so far!</p>Open source in 2019, Part 1/32019-07-01T14:50:00+02:002019-07-01T14:50:00+02:00Thierry Carreztag:ttx.re,2019-07-01:/open-source-2019-part1.html<p>21 years in, the landscape around open source evolved a lot. Today, "open
source" is not enough. In my opinion it is necessary, but it is not
sufficient. In this 3-part series I'll detail why, starting with Part 1
-- <strong>why open source is necessary today</strong>.</p>
<h2>What open source is</h2>
<p>Free …</p><p>21 years in, the landscape around open source evolved a lot. Today, "open
source" is not enough. In my opinion it is necessary, but it is not
sufficient. In this 3-part series I'll detail why, starting with Part 1
-- <strong>why open source is necessary today</strong>.</p>
<h2>What open source is</h2>
<p>Free software started in the 1980s by defining a number of freedoms. The
author of free software has to grant users (and future contributors to
the software) those freedoms. To summarize, those freedoms made you free
to study, improve the software, and distribute your improvements to the
public, so that ultimately everyone benefits. That was done in reaction
to the emergence of "proprietary" software in a world that previously
considered software a public good.</p>
<p>When open source was defined in 1998, it focused on a more specific angle:
the rights users of the software get with the software, like access to the
source code, or lack of constraints on usage. This direct focus on user
rights (and less confusing naming) made it much more understandable to
businesses and was key to the success of open source in our industry today.</p>
<p>Despite being more business-friendly, open source was never a "business
model". Open source, like free software before it, is just a set of
freedoms and rights attached to software. Those are conveyed through software
licenses and using copyright law as their enforcement mechanism. Publishing
software under an F/OSS license may be a component of a business model, but
if it is the only one, then you have a problem.</p>
<h2>Freedoms</h2>
<p>The freedoms and rights attached to free and open source software bring
a number of key benefits for users.</p>
<p>The first, and most-often cited of those benefits is <strong>cost</strong>. Access
to the source code is basically free as in beer. Thanks to the English
language, this created interesting confusion in the mass market as to
what the "free" in "free software" actually meant. You can totally sell
"free software" -- this is generally done by adding freedoms or bundling
services beyond what F/OSS itself mandates (and not by <em>removing</em> freedoms,
as some recently would like you to think).</p>
<p>If the cost benefit has proven more significant as open source evolved,
it's not because users are less and less willing to pay for software or
computing. It's due to the more and more ubiquitous nature of computing. As
<a href="https://a16z.com/2011/08/20/why-software-is-eating-the-world/">software eats the world</a>,
the traditional software pay-per-seat models are getting less and less
adapted to how users work, and they create extra friction in a world
where everyone competes on speed.</p>
<p>As an engineer, I think that today, cost is a scapegoat benefit. What
matters more to users is actually <strong>availability</strong>. With open source
software, there is no barrier to trying out the software with all of its
functionality. You don't have to ask anyone for permission (or enter any
contractual relationship) to evaluate the software for future use, to
experiment with it, or just to have fun with it. And once you are ready
to jump in, there is no friction in transitioning from experimentation
to production.</p>
<p>As an executive, I consider <strong>sustainability</strong> to be an even more
significant benefit. When an organization makes the choice of deploying
software, it does not want to be left without maintenance, just because
the vendor decides to drop support for the software it runs, or just
because the vendor goes bankrupt. The source code being available for
anyone to modify means you are not relying on a single vendor for
long-term maintenance.</p>
<p>Having a multi-vendor space is also a great way to avoid lock-in. When
your business grows a dependency on software, the cost of switching to
another solution can get very high. You find yourself on the vulnerable
side of maintenance deals. Being able to rely on a market of vendors
providing maintenance and services is a much more sustainable way of
consuming software.</p>
<p>Another key benefit of open source adoption in a corporate setting is
that open source makes it easier to <strong>identify and attract talent</strong>.
Enterprises can easily identify potential recruits based on the open
record of their contributions to the technology they are interested in.
Conversely, candidates can easily identify with the open source
technologies an organization is using. They can join a company with
the certainty that they will be able to capitalize on the software experience
they will grow there.</p>
<p>A critical benefit on the technical side is <strong>transparency</strong>. Access to
the source code means that users are able to look under the hood and
understand <em>by themselves</em> how the software works, or why it behaves
the way it does. Transparency also allows you to efficiently audit
the software for security vulnerabilities. Beyond that, the ability to
take and modify the source code means you have the possibility of
<strong>self-service</strong>: finding and fixing issues by yourself, without even
depending on a vendor. In both cases that increases your speed in
reacting to unexpected behavior or failures.</p>
<p>Last but not least: with open source you have the possibility to engage
in the community developing the software, and to influence its direction
by contributing directly to it. This is not about "giving back" (although
that is nice). Organizations that engage in open source communities
are more efficient, can anticipate changes, and can voice concerns about
decisions that would adversely affect them. They can make sure the
software <strong>adapts to future needs</strong> by growing the features they will
need tomorrow.</p>
<h2>Larger benefits for ecosystems</h2>
<p>Beyond those user benefits (directly derived from the freedoms and rights
attached to F/OSS), open source software also has positive effects on
wider ecosystems.</p>
<p>Monopolies are bad for users. Monocultures are vulnerable environments.
Open source software allows challengers to group efforts and collaborate
to build an alternative to the monopoly player. It does not need to beat
or eliminate the proprietary solution -- being successful is enough to
create a balance and result in a healthier ecosystem.</p>
<p>Looking at the big picture, we live on a planet with limited natural
resources, where reducing waste and optimizing productivity are becoming truly
critical. As software gets pervasive and more and more people produce it,
the open source production model reduces duplication of effort and the waste
of energy of having the same solutions developed in multiple parallel
proprietary silos.</p>
<p>Finally, I personally think a big part of today's social issues is the
result of artificially separating our society between producers and
consumers. Too many people are losing the skills necessary to build
things, and are just given subscriptions, black boxes and content to
absorb in a well-oiled consumption machine. Free and open source software
blurs the line between producers and consumers by removing barriers and
making every consumer a potential producer. It is part of the solution
rather than being part of the problem.</p>
<p>All those benefits explain why open source software is so successful today.
Those unique benefits ultimately make a superior product, one that is a smart
choice for users. It is also a balancing force that drives good hygiene to
wider ecosystems, which is why I would go so far as saying it is necessary
in today's world. In <a href="https://ttx.re/open-source-2019-part2.html">part 2</a>
we'll see why today, while being necessary, open source is no longer
sufficient.</p>The Future of Project Teams Gatherings2018-09-04T12:12:00+02:002018-09-04T12:12:00+02:00Thierry Carreztag:ttx.re,2018-09-04:/future-of-ptg.html<p>Next week, OpenStack contributors will come together in Denver, Colorado at
the <a href="https://www.openstack.org/ptg">Project Teams Gathering</a>
to discuss in-person the work coming up for the
<a href="https://releases.openstack.org/stein/schedule.html">Stein release cycle</a>.
This regular face-to-face meeting time is critical:
it allows us to address issues that are not easily fixed in virtual
communications, like brainstorming …</p><p>Next week, OpenStack contributors will come together in Denver, Colorado at
the <a href="https://www.openstack.org/ptg">Project Teams Gathering</a>
to discuss in-person the work coming up for the
<a href="https://releases.openstack.org/stein/schedule.html">Stein release cycle</a>.
This regular face-to-face meeting time is critical:
it allows us to address issues that are not easily fixed in virtual
communications, like brainstorming solutions, agreeing on implementation
details, or building up personal relationships. Since day 0 in OpenStack
we have had such events, but their shape and form evolved with our community.</p>
<h3>A brief history of contributor events</h3>
<p>It started with the Austin Design Summit in July 2010, where the basics of
the project were discussed. The second Design Summit in San Antonio at the
end of 2010 introduced a parallel business track, which grew in importance
as more organizations and potential users joined the fray. The contributors
gathering slowly became a subevent happening at the same time as the main
"Summit". By 2015, summits were 5-day events attracting 6000 people. It made
for a very busy week, and made it very difficult for contributors to focus on the
necessary discussions with the distractions and commitments of the main event
going on at the same time.</p>
<p>The time was ripe for a change, and that is when we introduced the idea of a
Project Teams Gathering (PTG). The PTG was a separate 5-day event for
contributors to discuss in-person in a calmer, more productive setting.
By the Austin Summit in 2016, it was pretty clear that this was the only option
to get productive gatherings again, and the decision was made to roll out
our first PTG in February, 2017 in Atlanta. Attendees loved the small event
feel and their restored productivity. Some said they got more done during
that week than in all old Design Summits (combined), despite some challenges
in navigating the event. We iterated on that formula in Denver and Dublin,
creating tools to make the unstructured and dynamic event agenda more
navigable, by making what is currently happening more discoverable. The
format was extended to include other forms of contributor teams, like SIGs,
workgroups, or Ops meetups. Feedback on the event by the attendees was
extremely good.</p>
<h3>The limits of the PTG model</h3>
<p>While the feedback at the event was excellent, over the last year it became
pretty clear that holding a separate PTG created a lot of tension. The most
obvious tension was between PTG and Summit. The PTG was designed as an
additional event, not a replacement. In particular, developers were still very
much wanted at the main <em>Summit</em> event, to maintain the technical level of the
event, to reach out to new contributors and users, to discuss with operators
the future of the project at the <em>Forum</em>. But it is hard to justify traveling
internationally 4 times per year to follow a mature project, so a lot of
people ended up choosing one or the other. Smaller teams usually skipped
the PTG, while many in larger teams would skip the Summit. That created
community fragmentation between the ones who could attend 4 events per year
and the ones who could not. And <em>those who could not</em> were on the rise: with
the growth in OpenStack adoption in China, the number of contributors, team
leaders and teams where most members are based in China increased
significantly.</p>
<p>Beyond that, the base of contributors to OpenStack is changing: less and less
vendor-driven and more and more user-driven. That is a generally good thing,
but it means that we are slowly moving away from contributors who are 100%
employed to work upstream (and therefore travel as many times a year as
necessary to maximize that productivity) toward contributors that spend a
couple of hours per week to help upstream (for which travel is at a premium).
There are a lot of things OpenStack needs to change to be more friendly to
this type of contributor, and the PTG format was not really helping in this
transition.</p>
<p>Finally, over the last year it became clear that the days of the 5-day-long
5000-people events were gone. Once the initial curiosity and hype-driven
attendance is passed, and people actually start to understand what OpenStack
can be used for (or not used for), you end up with a less overwhelming event,
with a more reasonable number of attendees and days. Most of the 2015-2016
reasons for a separate event actually no longer apply.</p>
<h3>Trying a different trade-off</h3>
<p>We ran a number of surveys to evaluate our options -- across Foundation
sponsors, across PTG attendees, across contributors at large. About 60%
of contributors supported co-locating the PTG with the Summit. Even only
considering past PTG attendees, 53% still support co-location. 85% of the
22 top contributing organizations also supported co-location, although some
of the largest ones would prefer to keep it separate. Overall, it felt like
enough of the environment changed that even for those who had benefited
from the event in the past, the solution we had was no longer necessarily
the optimal choice.</p>
<p>In Dublin, then in Vancouver, options were discussed with the Board, the
Technical Committee and the User Committee, and the
<a href="http://lists.openstack.org/pipermail/foundation/2018-June/002598.html">decision was made</a>
to relocate the Project Teams Gathering with the Summits in 2019.
The current plan is to run the first Summit in 2019 from Monday to Wednesday,
then a 3-day PTG from Thursday to Saturday. The Forum would still happen during
the Summit days, so the more strategic discussions that happened at the PTG
could move there.</p>
<p>Obviously, some of the gains of holding the PTG as a separate event will be
lost. In particular, a separate event allowed us to hold strategic discussions
(the Forum at the Summit) at a different point in the development cycle than the
more tactical discussions (the PTG). Some of the frustration of discussing
both in the same week, when it is a bit late to influence the cycle focus,
will return. The Summit will again happen close to releases, without
giving vendors much time to build products on them, or deployers time to
try them, reducing the quality of the feedback we get at the Forum.</p>
<p>That said, with co-location we will strive to keep as much as we can of what
made the PTG unique and productive. In order to preserve the distinct
productive feel of the event, the PTG will be organized as a <strong>completely
separate event</strong>, with its own registration and branding. It will keep its
unstructured content and dynamic schedule tools. To prevent the
activities of the Summit from distracting PTG attendees with outside
commitments, the co-located PTG will happen on entirely separate days,
once the Summit is over. There are only so many days in a week, though,
so the trade-off here is to end the PTG on Saturday.</p>
<p>This change is likely to anger or please you, depending on where you stand.
It is important to realize that there is no perfect solution here. Any
solution we choose will be a trade-off between a large number of variables:
including more contributors, maximizing attendee productivity, getting a
critical mass of people present at our events, containing travel costs...
We just hope that this new trade-off will strike a better balance for
OpenStack in 2019, and the Foundation will continue to adapt its event strategy
to changing conditions in the future.</p>A framework for lightweight open source governance2018-06-25T13:45:00+02:002018-06-25T13:45:00+02:00Thierry Carreztag:ttx.re,2018-06-25:/lightweight-governance-framework.html<p>Any group of humans needs some form of governance. It’s a set of rules
the group follows in order to address issues and take clear decisions.
Even the absence of rules (anarchy) is a form of governance! At the
opposite end of the spectrum is dictatorship, where all decisions …</p><p>Any group of humans needs some form of governance. It’s a set of rules
the group follows in order to address issues and take clear decisions.
Even the absence of rules (anarchy) is a form of governance! At the
opposite end of the spectrum is dictatorship, where all decisions are
made by one person. Open source projects are groups of humans, and they
are no exception to this. They can opt for various governance models,
which I detailed in a
<a href="https://ttx.re/foss-project-governance-models.html">previous article</a>
four years ago (how time flies!).</p>
<p>That <a href="https://ttx.re/foss-project-governance-models.html">article</a> compared
various overall models in terms of which one would best ensure the long-term
survival of the community, avoiding revolutions (or forks). It advocated for
a representative democracy model, and since then I've been asked several
times for the best recipe to implement it. However, there are numerous
trade-offs in the exercise of building governance, and the "best" depends
a lot on the specifics of each project situation. So, rather than detail a
perfect one-size-fits-all governance recipe, in this article I'll propose a
framework of three basic rules to keep in mind when implementing it.</p>
<p>This simple 3-rule model can be used to create <strong>just enough</strong> governance,
a lightweight model that should be sustainable over the long run, while
avoiding extra layers of useless bureaucracy.</p>
<h3>Rule #1: Contributor-driven bodies</h3>
<p>Governance bodies for an open source project should be selected by the
contributors to the project. I'm not talking of governance bodies for
open source Foundations (which generally benefit from having some
representation of their corporate sponsors chiming in on how their
money shall be spent). I'm talking about the upstream open source project
itself, and how the technical choices end up being made in a community of
contributors.</p>
<p>This rule is critical: it ensures that the people contributing code,
documentation, usage experience, mentoring time or any other form of
contribution to the project are aligned with the leadership of the
project. When this rule is not met, generally the leadership and the
contributors gradually drift apart, to the point where the contributors
no longer feel like their leadership represents them. This situation
generally ends with contributors making the disruptive decision to fork
the project under a new, contributor-aligned governance, generally leaving
the old governance body with a trademark and an empty shell to govern.</p>
<p>One corollary of that first rule is that the governance system must
regularly allow replacement of current leaders. Nobody should be appointed
for life, and the contributors should regularly be consulted, especially
in fast-moving communities.</p>
<h3>Rule #2: Aligned with their constituencies</h3>
<p>This is another corollary of the first rule. In larger projects, you need
enough governance bodies to ensure that each is aligned with its own
constituency. In particular, if your community is made of disjoint groups
with little to no overlap in membership, and those groups each need decisions
to be made, they probably need to each have their own governance body at that
level.</p>
<p>The risk we are trying to avoid here is dominance of the larger group over
smaller groups. If you use a single governance body for two (or more)
disjoint groups, chances are that the larger group will dominate the
representative governance body, and therefore will end up making decisions
for the smaller group. This is generally OK for global decisions that affect
every contributor equally, but matters that are solely relevant to the
smaller group should be decided at the smaller group level, otherwise that
group might be tempted to fork to regain final call authority over their
own things.</p>
<h3>Rule #3: Only where decisions are needed</h3>
<p>Strict application of rule #2 tends to result in the creation of a large
number of governance bodies; that's why you need to balance it with rule #3:
only create governance bodies where decisions are actually needed. The art
of lightweight governance is, of course, to find the best balance between
rule #2 and rule #3.</p>
<p>This rule has two practical consequences. The first one is obvious: you should
not create vanity governance bodies, just to give people or organizations a
cool title or badge. Numerous communities fall into the trap of creating
"advisory" boards with appointed seats, to thank long-standing community
members, or give organizations the illusion of control. Those bodies create
extra bureaucracy while not being able to make a single call, or worse,
trying desperately to assert authority to justify their existence.</p>
<p>The second consequence is, before creating a governance body at a certain
level in the project organization, you should question whether decisions
are really needed at that level. If the group needs no final call, or can
trust an upper decision body to make the call if need be, maybe that
governance body is not needed. If two governance bodies need to cooperate
to ensure things work well between them, do you really need to create a
governance body above them, or can you just encourage discussion and
collaboration? This trade-off is more subtle, but generally boils down to how
badly you need a final decision to be made, vs. letting independently-made
decisions coexist.</p>
<p>That is all there is to it! As I said in the introduction, those three
rules are not really a magic recipe, but more of a basic framework to
help you, in the specific situation of your community, build healthy
communities with <em>just enough</em> governance. Let me know if you find it useful!</p>The OpenStack map2018-06-18T15:22:00+02:002018-06-18T15:22:00+02:00Thierry Carreztag:ttx.re,2018-06-18:/the-openstack-map.html<p>In the ancient times (circa 2012), as OpenStack started to grow significantly,
Ken Pepple created a
<a href="http://2.bp.blogspot.com/-o9uMwnV-GQI/UKA0OWX6-BI/AAAAAAAAF8s/CRgqtpNwJxk/s1600/openstack-logical-arch-folsom.jpg">diagram</a>
to represent the various OpenStack components and how information flowed
between them. This diagram took a life of its own, being included in one
version or another in every presentation to show in …</p><p>In the ancient times (circa 2012), as OpenStack started to grow significantly,
Ken Pepple created a
<a href="http://2.bp.blogspot.com/-o9uMwnV-GQI/UKA0OWX6-BI/AAAAAAAAF8s/CRgqtpNwJxk/s1600/openstack-logical-arch-folsom.jpg">diagram</a>
to represent the various OpenStack components and how information flowed
between them. This diagram took a life of its own, being included in one
version or another in every presentation to show in one spaghetti picture the
complexity of OpenStack.</p>
<p>As we kept adding new (more or less optional) components to the mix, we stopped
trying to represent everything in a single diagram, especially as the
Technical Committee refused to special-case some components over others. That
left us with a confusing list of 60+ project teams ranging from Nova to
Winstackers, and no way to clearly represent "OpenStack".</p>
<p>This situation was identified as a key issue by the Board of Directors, the
Technical Committee, the User Committee and the Foundation staff during a
strategic workshop held last year in Boston. As a result, a group formed to
define how to better communicate what OpenStack is, and a subgroup worked more
specifically on a new map to represent OpenStack. Here is the result:</p>
<p><img alt="The OpenStack map v.20180501" src="https://ttx.re/images/map.png"></p>
<p>There are a number of things you should notice. First, the map is regularly
updated. This is the latest version, from May 2018. The map is also versioned,
using a date-based number. So if someone copies it for their presentation
and it gets cargo-culted into generations of presentations from there on,
it should be pretty apparent that this may not be the latest available version.</p>
<p>Cartographers know that map design is more about what you leave out than
about what you represent. This map is very opinionated in that respect.
It is designed to be relevant to <strong>consumers</strong> of OpenStack technology.
So it only represents first-order deliverables, things that someone may opt
to install or use. That's the reason why it shows Nova, but not Oslo libraries:
it does not represent second-order deliverables that first-order deliverables
depend on. It also ignores plug-ins or drivers that run on a main deliverable
(like Storlets running on top of Swift, Dragonflow running on top of Neutron,
or magnum-ui running on top of Horizon).</p>
<p>The remaining components are laid out in named "buckets", based on who the
consumer is and what question they answer. There is the main <em>OpenStack</em>
bucket, which contains components that provide a user-facing API, that you
may deploy to extend the capabilities of your cloud deployment. On the right,
the <em>OpenStack-operations</em> bucket contains add-on components that facilitate
operating an OpenStack cloud. On the bottom, the
<em>OpenStack-lifecyclemanagement</em> bucket shows the various solutions you can
use to facilitate installation and lifecycle management of an OpenStack
cloud. On the left, the <em>OpenStack-user</em> bucket contains tools that end users
of OpenStack clouds can install to help interact with a running OpenStack
cloud. And finally, the <em>OpenStack-adjacentenablers</em> bucket contains tools that
help other technology stacks (Kubernetes, NFV...) make use of OpenStack
services.</p>
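<p>For readers who prefer code to prose, the bucket layout can be sketched as a
simple data structure. This is an illustrative encoding only, not an official
format, and the one-line summaries are paraphrased from the descriptions above:</p>

```python
# The five buckets of the OpenStack map and the question each answers,
# paraphrased from the description above. Illustrative encoding only.
buckets = {
    "OpenStack": "user-facing services you may deploy to extend your cloud",
    "OpenStack-operations": "add-ons that facilitate operating an OpenStack cloud",
    "OpenStack-lifecyclemanagement": "solutions for installation and lifecycle management",
    "OpenStack-user": "tools end users install to interact with a running cloud",
    "OpenStack-adjacentenablers": "bridges letting other stacks (Kubernetes, NFV...) use OpenStack",
}

for name, audience in buckets.items():
    print(f"{name}: {audience}")
```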
<p>Inside each bucket, deliverables are approximately categorized based on what
service they deliver. In addition to that, the main <em>OpenStack</em> bucket is
organized in a semi-logical manner (base services at the bottom, higher-level
services at the top). An opinionated set of "core functionality" is marked in
bold to attract the attention of the casual observer to the most-consumed
components.</p>
<p>There are lots of different ways to slice this cake, and a lot of things do
not perfectly fit in the simplistic view that the map presents. The result is
obviously very opinionated, so it cannot please everyone. That's why it's
produced by the Foundation staff, with input from the Technical
Committee, the User Committee and the Board of Directors. That doesn't mean
its design cannot change, or be fixed over time to better represent the
reality.</p>
<p>Working on this exercise really helped me visualize "OpenStack" as a
product. You can see the main product (the <em>OpenStack</em> bucket), separate from
operational add-ons, deployment tools, client tools and technology bridges.
You can see things that do not fit well in the map, or stay at the edges of
the map, that we could consider cutting out if they are not successful.</p>
<p>We hope that this map helps people to visually represent OpenStack and can
replace the infamous spaghetti diagram in future slidedecks. The next step
is to communicate that map more widely, and leverage it more heavily on web
properties like the
<a href="https://www.openstack.org/software/project-navigator">Project Navigator</a>.
You can always find the most recent version of the map at
<a href="http://www.openstack.org/openstack-map">www.openstack.org/openstack-map</a>.</p>OpenStack Spectre/Meltdown FAQ2018-01-08T17:03:00+01:002018-01-08T17:03:00+01:00Thierry Carreztag:ttx.re,2018-01-08:/openstack-spectre-meltdown-faq.html<h2>What are Meltdown and Spectre?</h2>
<p>Meltdown and Spectre are the brand names of a series of vulnerabilities
discovered by various security researchers around performance optimization
techniques built in modern CPUs. Those optimizations (involving superscalar
capabilities, out-of-order execution, and speculative branch prediction)
fundamentally create a
<a href="https://en.wikipedia.org/wiki/Side-channel_attack">side-channel</a> that can
be exploited to …</p><h2>What are Meltdown and Spectre?</h2>
<p>Meltdown and Spectre are the brand names of a series of vulnerabilities
discovered by various security researchers around performance optimization
techniques built in modern CPUs. Those optimizations (involving superscalar
capabilities, out-of-order execution, and speculative branch prediction)
fundamentally create a
<a href="https://en.wikipedia.org/wiki/Side-channel_attack">side-channel</a> that can
be exploited to deduce the content of computer memory that should normally
not be accessible.</p>
<h2>Why is it big news?</h2>
<p>It's big news because rather than affecting a specific operating system,
it affects most modern CPUs, in ways that cannot be completely fixed
(as you can't physically extract the flawed functionality out of your CPUs).
The real solution is in a new generation of CPU optimizations that will
not exhibit the same flaws while reaching the same levels of performance.
This is unlikely to come soon, which means we'll have to deal with workarounds
and mitigation patches for a long time.</p>
<h2>Why is it business as usual?</h2>
<p>As <a href="https://www.schneier.com/essays/archives/1999/11/a_plea_for_simplicit.html">Bruce Schneier</a>
says, "you can't secure what you don't understand". As we build more complex
systems (in CPUs, in software, in policies), it is more difficult to build
them securely, and they can fail in more subtle ways. There will always be
new vulnerabilities and new classes of attacks found, and the answer is always
the same: designing defense in depth, keeping track of vulnerabilities found,
and swiftly applying patches. This episode might be big news, but the
remediation is still applying well-known techniques and processes.</p>
<h2>Are those 2 or 3 different vulnerabilities?</h2>
<p>There are actually three different exploitation techniques of the same family
of vulnerabilities, which need to be protected against separately.</p>
<ul>
<li>
<p><em>CVE-2017-5753</em> (“bounds check bypass”, or variant 1) is one of the two
<em>Spectre</em> variants. It affects specific sequences within compiled applications,
which must be addressed on a per-binary basis. Applications that can be made
to execute untrusted code (e.g. operating system kernels or web browsers) will
need updates as more of those exploitable sequences are found.</p>
</li>
<li>
<p><em>CVE-2017-5715</em> (“branch target injection”, or variant 2) is the other
<em>Spectre</em> variant. It more generally works by poisoning the CPU branch
prediction cache to induce privileged applications to leak small bits of
information. This can be fixed by a CPU microcode update or by applying
advanced software mitigation techniques (like Google's Retpoline) to the
vulnerable binaries.</p>
</li>
<li>
<p><em>CVE-2017-5754</em> (“rogue data cache load”, or variant 3) is also called
<em>Meltdown</em>. This technique lets any unprivileged process read kernel memory
(and therefore access information and secrets in other processes running
on the same system). It is the easiest to exploit, and requires patching
the operating system to reinforce isolation of memory page tables at the
kernel level.</p>
</li>
</ul>
<h2>What is the impact of those vulnerabilities for OpenStack cloud users?</h2>
<p>Infrastructure as a service harnesses virtualization and containerization
technologies to present a set of physical, bare-metal resources as virtual
computing resources. It heavily relies on the host kernel security features
to properly isolate untrusted workloads, especially the various virtual
machines running on the same physical host. When those fail (as is the
case here), hypervisor isolation can break. An attacker in a hostile VM
running on an unpatched host kernel could use those techniques to access
data in other VMs running on the same host.</p>
<p>Additionally, if the guest operating system of your VMs is not patched
(or you run a vulnerable application) and untrusted code runs on that VM
(or in that application), that code could leverage those vulnerabilities
to access information in the memory of other processes on the same VM.</p>
<h2>What should I do as an OpenStack cloud provider?</h2>
<p>Cloud providers should apply kernel patches (from their Linux distribution),
hypervisor software updates (from the distribution or their vendor) and CPU
microcode updates (from their hardware vendor) that work around or mitigate
those vulnerabilities as soon as they are made available, in order to protect
their users.</p>
<h2>What should I do as an OpenStack cloud user?</h2>
<p>Cloud users should watch for and apply operating system patches for their
guest VMs as soon as they are made available. This advice actually applies
to any computer (virtual or physical) you happen to use (including your phone).</p>
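<p>On Linux, one practical way to verify whether a given kernel (host or guest)
carries these mitigations is the sysfs interface that ships with the mitigation
patches. Here is a minimal sketch, assuming a kernel recent enough to expose
that interface; on older kernels the directory simply does not exist:</p>

```python
from pathlib import Path

# Kernels carrying the mitigation patches expose per-vulnerability status
# files here; older kernels do not have this directory at all.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status():
    """Return a {vulnerability: status} map, empty if the interface is absent."""
    if not VULN_DIR.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

status = mitigation_status()
if status:
    for name, state in sorted(status.items()):
        print(f"{name}: {state}")
else:
    print("no vulnerabilities interface; kernel likely predates the patches")
```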
<h2>Are patches available already?</h2>
<p>Some patches are out, some are still due. Kernel patches mitigating the
Meltdown attack are available upstream, but they are significant patches
with lots of side-effects, and some OS vendors are still testing them.
The coordinated disclosure process failed to keep the issue secret until the
planned publication date, which explains why some OS vendors or distributions
were not ready when the news dropped.</p>
<p>It is also important to note that this is likely to trigger a long series
of patches, as the workarounds and mitigation patches are refined to reduce
side-effects and new bugs that those complex patches themselves create. The
best recommendation is to keep an eye on your OS vendor patches (and CPU
vendor microcode updates) for the coming months and apply all patches quickly.</p>
<h2>Is there a performance hit in applying those patches?</h2>
<p>The workarounds and mitigation techniques are still being developed, so it
is a little early to say, and it will always depend on the exact workload.
However, since the basic flaw here lies in performance optimization techniques
in CPUs, most workarounds and mitigation patches should add extra checks,
steps and synchronization that will undo some of that performance
optimization, resulting in a performance hit.</p>
<h2>Is there anything that should be patched on the OpenStack side?</h2>
<p>While OpenStack itself is not directly affected, it is likely that some of
the patches that are and will be developed to mitigate those issues will
require optimizations in software code to limit the performance penalty.
Keep an eye on our stable branches and/or your OpenStack vendor patches
to make sure you catch any of those.</p>
<p>Those vulnerabilities also shine some light on the power of side-channel
attacks, which shared systems are traditionally more vulnerable to. Security
research is likely to focus on this class of issues in the near future,
potentially discovering side-channel security attacks in OpenStack that
will need to be fixed.</p>
<h2>Where can I learn more?</h2>
<p>You can find lots of explanations over the Internet. To understand the basic
flaw and the CPU technologies involved, I recommend reading
<a href="https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/">Eben Upton's great post</a>.
If that's too deep or you need a good analogy to tell your less-technical
friends, I find
<a href="https://medium.com/@rgmerk/an-explanation-of-meltdown-and-spectre-for-non-programmers-7e98b0a28da4">this one by Robert Merkel</a> not too bad.</p>
<p>For technical details on the vulnerabilities themselves,
<a href="https://googleprojectzero.blogspot.fr/2018/01/reading-privileged-memory-with-side.html">Jann Horn's post on Google Project Zero blog</a>
should be first on your list. You can also read the
<a href="https://spectreattack.com/spectre.pdf">Spectre</a>
and <a href="https://meltdownattack.com/meltdown.pdf">Meltdown</a> papers.</p>
<p>For more information on the various mitigation techniques, I recommend
starting with
<a href="https://security.googleblog.com/2018/01/more-details-about-mitigations-for-cpu_4.html">this article from Google's Security blog</a>.
For information about Linux kernel patches in particular, I recommend
<a href="http://kroah.com/log/blog/2018/01/06/meltdown-status/">Greg Kroah-Hartman's post</a>.</p>What to expect from the Queens PTG2017-08-29T12:02:00+02:002017-08-29T12:02:00+02:00Thierry Carreztag:ttx.re,2017-08-29:/queens-ptg.html<p>In less than two weeks, OpenStack upstream developers and project team members
will assemble in Denver, Colorado for a week of team discussions, kickstarting
the <a href="https://releases.openstack.org/queens/schedule.html">Queens development cycle</a>.</p>
<p>Attending <a href="https://www.openstack.org/ptg">the PTG</a> is a great way to make
upstream developers more efficient and productive: participating in the new
development cycle organization …</p><p>In less than two weeks, OpenStack upstream developers and project team members
will assemble in Denver, Colorado for a week of team discussions, kickstarting
the <a href="https://releases.openstack.org/queens/schedule.html">Queens development cycle</a>.</p>
<p>Attending <a href="https://www.openstack.org/ptg">the PTG</a> is a great way to make
upstream developers more efficient and productive: participating in the new
development cycle organization, solving early blockers and long-standing
issues in-person, and building personal relationships to ease interactions
afterwards.</p>
<h2>What changed since Atlanta?</h2>
<p>The main piece of feedback we received from the Pike PTG in Atlanta was that
with the ad-hoc discussions and dynamic scheduling, it was hard to discover
what was being discussed in every room. This was especially an issue during
the first two days, where lots of vertical team members were around but did
not know which room to go to.</p>
<p>In order to address that issue while keeping the scheduling flexibility that
makes this event so productive, we created an IRC-driven dynamic notification
system. Each room moderator is able to signal what is being discussed right
now, and what will be discussed next in the <code>#openstack-ptg</code> IRC channel. That
input is then collected into a mobile-friendly webpage for easy access. That
page also shows sessions scheduled in the reservable extra rooms via Ethercalc,
so it's a one-stop view of what's being currently discussed in every room,
and what you could be interested in joining next.</p>
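<p>To illustrate the idea (this is a hypothetical sketch, not the actual bot's
code or command syntax), moderator updates could be folded into a room
schedule like this:</p>

```python
import re

# Hypothetical message format, loosely modelled on the description above:
#   "#<room> now <topic>"  or  "#<room> next <topic>"
# The real bot's syntax may differ; this only illustrates the concept.
UPDATE = re.compile(r"^#(?P<room>[\w-]+)\s+(?P<slot>now|next)\s+(?P<topic>.+)$")

def apply_update(schedule, message):
    """Fold one moderator IRC message into a {room: {slot: topic}} schedule."""
    m = UPDATE.match(message.strip())
    if m:
        schedule.setdefault(m.group("room"), {})[m.group("slot")] = m.group("topic")
    return schedule

schedule = {}
apply_update(schedule, "#nova now cells v2 scheduling")
apply_update(schedule, "#nova next placement API")
print(schedule)  # {'nova': {'now': 'cells v2 scheduling', 'next': 'placement API'}}
```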
<p>The other piece of feedback that we received in Atlanta was that the
horizontal/vertical week slicing was suboptimal. Having all horizontal teams
(QA, Infra, Docs) meet on Monday-Tuesday and all vertical teams (Nova, Cinder,
Swift) meet on Wednesday-Friday was a bit arbitrary and did not make optimal
use of the time available.</p>
<p>For Denver we still split the week in two, but with a slightly different
pattern. On Monday-Tuesday we'll have inter-team discussions, with rooms
centered more on topics than on teams, focused on solving problems.
On Wednesday-Friday we'll have intra-team discussions, focused on organizing,
prioritizing and bootstrapping the work for the rest of the development cycle.
Such a week split won't magically suppress all conflicts obviously, but we
hope it will improve the overall attendee experience.</p>
<h2>What rooms/topics will we have on Monday-Tuesday?</h2>
<p><strong>Compute stack / VM & BM WG (#compute)</strong>:
In this room, we’ll have discussions to solve inter-project issues within the base compute stack (Keystone, Cinder, Neutron, Nova, Glance, Ironic…).</p>
<p><strong>API SIG (#api)</strong>:
In this room, we’ll discuss API guidelines to further improve the coherence and compatibility of the APIs we present to the cloud user. Members of the SIG will also be hosting guided reviews of potential API changes, see the openstack-dev mailing list for more details.</p>
<p><strong>Infra / QA / RelMgt / Stable / Requirements helproom (#infra)</strong>:
Join this room if you have any questions about or need help with anything related to the development infrastructure, in a large sense. Can be questions around project infrastructure configuration, test jobs (including taking advantage of the new Zuul v3 features), the “Split Tempest plugins” Queens goal, release management, stable branches, global requirements.</p>
<p><strong>Packaging WG (#packaging)</strong>:
In this room, we’ll discuss convergence and commonality across the various ways to deploy OpenStack: Kolla, TripleO, OpenStackAnsible, Puppet-OpenStack, OpenStack Chef, Charms...</p>
<p><strong>Technical Committee / Stewardship WG (#tc)</strong>:
In this room, we’ll discuss project governance issues in general, and stewardship challenges in particular.</p>
<p><strong>Skip-level upgrading (#upgrading)</strong>:
Support for skip-level upgrading across all OpenStack components will be discussed in this room. We’ll also discuss increasing the number of projects that support rolling upgrades, zero-downtime upgrades and zero-impact upgrades.</p>
<p><strong>GUI helproom / Horizon (#horizon)</strong>:
Join this room if you have questions or need help writing a Horizon dashboard for your project, and want to learn about the latest Horizon features. Horizon team members will also discuss Queens cycle improvements here.</p>
<p><strong>Oslo common libraries (#oslo)</strong>:
Current and potential future Oslo libraries will be discussed in this room. Come to discuss pain points or missing features, or to learn about libraries you should probably be using.</p>
<p><strong>Docs / I18n helproom (#docs-i18n)</strong>:
Documentation has gone through a major transition at the end of Pike, with more doc maintenance work in the hands of each project team. The Docs and I18n teams will meet in this room and be available to mentor and give guidance to Doc owners in every team.</p>
<p><strong>Simplification (#simplification)</strong>:
Complexity is often cited as the #1 issue in OpenStack. It is however possible to reduce overall complexity, by removing unused features, or deleting useless configuration options. If you’re generally interested in making OpenStack simpler, join this room!</p>
<p><strong>Make components reusable for adjacent techs (#reusability)</strong>:
We see more and more OpenStack components being reused in open infrastructure stacks built around adjacent technology. In this room we’ll tackle how to improve this component reusability, as well as look into things in adjacent communities we could take advantage of.</p>
<p><strong>CLI / SDK helproom / OpenStackClient (#cli)</strong>:
In this helproom we’ll look at streamlining our client-side face. Expect discussions around OpenStackClient, Shade and other SDKs.</p>
<p><strong>"Policy in code" goal helproom (#policy-in-code)</strong>:
For the Queens cycle we selected “Policy in code” as a cross-project release goal. Some teams will need help and guidance to complete that goal: this room is available to help you explain and make progress on it.</p>
<p><strong>Interoperability / Interop WG / Refstack (#interop)</strong>:
Interoperability between clouds is a key distinguishing feature of OpenStack clouds. The Interop WG will lead discussions around that critical aspect in this room.</p>
<p><strong>User Committee / Product WG (#uc)</strong>:
The User Committee and its associated subteams and workgroups will be present at the PTG too, with a goal all week to close the feedback loop from operators back to developers. This work will be prepared in this room on the first two days of the event.</p>
<p><strong>Security (#security)</strong>:
Security is a process which requires continuous attention. Security-minded folks will gather into this room to further advance key security functionality across all OpenStack components.</p>
<h2>Which teams are going to meet on Wednesday-Friday?</h2>
<p>The following project teams will meet for all three days:
<strong>Nova</strong>, <strong>Neutron</strong>, <strong>Cinder</strong>, <strong>TripleO</strong>, <strong>Ironic</strong>, <strong>Kolla</strong>,
<strong>Swift</strong>, <strong>Keystone</strong>, <strong>OpenStack-Ansible</strong>, <strong>Infrastructure</strong>, <strong>QA</strong>,
<strong>Octavia</strong>, and <strong>Glance</strong>.</p>
<p>The following project teams plan to only meet for two days, Wednesday-Thursday:
<strong>Heat</strong>, <strong>Watcher</strong>, <strong>OpenStack Charms</strong>, <strong>Trove</strong>, <strong>Congress</strong>,
<strong>Barbican</strong>, <strong>Mistral</strong>, <strong>Freezer</strong>, <strong>Sahara</strong>, <strong>Glare</strong>, and
<strong>Puppet OpenStack</strong>.</p>
<h2>Join us!</h2>
<p>We already have more than 360 people signed up, but we still have room for you!
<a href="https://www.eventbrite.com/e/project-teams-gathering-denver-2017-tickets-33219389087">Join us</a> if you can. The ticket price will increase this Friday though,
so if you plan to register I'd advise you to do so ASAP to avoid the
last-minute price hike.</p>
<p>The event hotel is pretty full at this point (with the last rooms available
priced accordingly), but there are
<a href="http://lists.openstack.org/pipermail/openstack-dev/2017-August/121192.html">lots of other options</a>
nearby.</p>
<p>See you there!</p>Introducing OpenStack SIGs2017-08-10T14:12:00+02:002017-08-10T14:12:00+02:00Thierry Carreztag:ttx.re,2017-08-10:/introducing-sigs.html<p>Back in March in Boston, the OpenStack Board of Directors, Technical Committee,
User Committee and Foundation staff members met for a strategic workshop. The
goal of the workshop was to come up with a list of key issues needing attention
from OpenStack leadership. One of the strategic areas that emerged …</p><p>Back in March in Boston, the OpenStack Board of Directors, Technical Committee,
User Committee and Foundation staff members met for a strategic workshop. The
goal of the workshop was to come up with a list of key issues needing attention
from OpenStack leadership. One of the strategic areas that emerged from that
workshop is the need to improve the feedback loop between users and developers
of the software. Melvin Hillsman volunteered to lead that area.</p>
<h3>Why SIGs?</h3>
<p>OpenStack was quite successful in building an organized, vocal, and engaged
user community. However, the developer and user communities are still mostly
acting as separate communities. Improving the feedback loop starts with
putting everyone caring about the same problem space in the same rooms and
work groups. The Forum (removing the artificial line between the Design Summit
and the Ops Summit) was a first step in that direction. SIGs are another step
in addressing that problem.</p>
<p>Currently in OpenStack we have various forms of workgroups, all attached to
a specific OpenStack governance body: User Committee workgroups (like the
Scientific WG or the Large Deployment WG), upstream workgroups (like the API
WG or the Deployment WG), or Board workgroups. Some of those are very focused
on a specific segment of the community, so it makes sense to attach them to a
specific governance body. But most are just a group of humans interested in
tackling a specific problem space together, and establishing those groups in
a specific community corner sends the wrong message and discourages
participation from everyone in the community.</p>
<p>As a result (and despite our efforts to communicate that everyone is welcome),
most TC-governed workgroups lack operator participants, and most UC-governed
workgroups lack developer participants. It's clearly not because the scope
of the group is one-sided (developers are interested in scaling issues,
operators are interested in deployment issues). It's because developers
assume that a user committee workgroup about "large deployments" is meant
to gather operator feedback rather than implementing solutions. It's because
operators assume that an upstream-born workgroup about "deployment" is only
to explore development commonalities between the various deployment strategies.
Or they just fly below the other group's usual radar. SIGs are about breaking
the artificial barriers and making it clear(er) that workgroups are for
everyone, by disconnecting them from the governance domains and the useless
upstream/downstream division.</p>
<h3>SIGs in practice</h3>
<p>SIGs are neither operator-focused nor developer-focused. They are open groups,
with documented guidance on how to get involved. They have a scope, a clear
description of the problem space they are working to address, or of the use
case they want to better support in OpenStack. Their membership includes
affected users that can discuss the pain points and the needs, as well as
development resources that can pool their efforts to achieve the group's goals.
Ideally everyone in the group straddles the artificial line between operators
and developers and identifies as a little of both.</p>
<p>In practice, SIGs are not really different from the various forms of workgroups
we already have. You can continue to use the same meetings, git repositories,
and group outputs that you used to have. To avoid systematic cross-posting
between the openstack-dev and the openstack-operators mailing-lists, SIG
discussions can use the new <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs">openstack-sigs mailing-list</a>,
and SIG members can take advantage of our various events (<a href="https://www.openstack.org/ptg">PTG</a>,
Ops meetups, Summits) to meet in person.</p>
<h3>Next steps</h3>
<p>We are only getting started. So far we only have one SIG: the "Meta" SIG, to
discuss advancement of the SIG concept. Several existing workgroups have
expressed their willingness to become early adopters of the new concept, so
we'll have more soon. If your workgroup is interested in being branded as a
SIG, let Melvin or me know, and we'll guide you through the process (which at
this point only involves being listed on a
<a href="https://wiki.openstack.org/wiki/OpenStack_SIGs">wiki page</a>). Over time we
expect SIGs to become the default: most community-specific workgroups would
become cross-community SIGs, and the remaining workgroups would become more
like subteams of their associated governance body.</p>
<p>And if you have early comments or ideas on SIGs, please join the Meta
discussion on the
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs">openstack-sigs mailing-list</a>,
(using the [meta] subject prefix)!</p>
<h1>Using proprietary services to develop open source software</h1>
<p><em>2017-02-19, Thierry Carrez</em></p>
<p>It is now pretty well accepted that open source is a superior way of
producing software. Almost everyone is doing open source these days.
In particular, the ability for users to look under the hood and make
changes results in tools that are better adapted to their workflows.
It reduces the cost and risk of finding yourself locked-in with a vendor
in an unbalanced relationship. It contributes to a virtuous circle of
continuous improvement, blurring the lines between consumers and producers.
It enables everyone to remix and invent new things. It adds up to the
common human knowledge.</p>
<h3>And yet</h3>
<p>And yet, a lot of open source software is developed on (and with the help
of) proprietary services running closed-source code. Countless open source
projects are developed on GitHub, or with the help of Jira for bugtracking,
Slack for communications, Google docs for document authoring and sharing,
Trello for status boards. That sounds a bit paradoxical and hypocritical --
a bit too much "do as I say, not as I do". Why is that? If we agree
that open source has so many tangible benefits, why are we so willing to
forfeit them with the very tooling we use to produce it?</p>
<h3>But it's free!</h3>
<p>The argument usually goes like this: those platforms may be proprietary, but
they offer great features, and they are provided free of charge to my open
source project. Why on Earth would I go through the hassle of setting up,
maintaining, and paying for infrastructure to run less featureful solutions?
Or why would I pay for someone to host it for me? The trick is, as the
saying goes, when the product is free, <strong>you</strong> are the product. In this case,
your open source community is the product. In the worst case scenario, the
personal data and activity patterns of your community members will be sold
to 3rd parties. In the best case scenario, your open source community is
recruited by force into an army that furthers the network effect and makes it
even more difficult for the next open source project to not use that
proprietary service. In all cases, you, as a project, decide to not bear the
direct cost, but ask each and every one of your contributors to pay for it
indirectly instead. You force all of your contributors to accept the
ever-changing terms of use of the proprietary service in order to participate
in your "open" community.</p>
<h3>Recognizing the trade-off</h3>
<p>It is important to recognize the situation for what it is. A trade-off.
On one side, shiny features, convenience. On the other, a lock-in of your
community through specific features, data formats, proprietary protocols or
just plain old network effect and habit. Each situation is different. In
some cases the gap between the proprietary service and the open platform
will be so large that it makes sense to bear the cost. Google Docs is pretty
good at what it does, and I find myself using it when collaborating on
something more complex than etherpads or ethercalcs. At the opposite end of
the spectrum, there is really <strong>no</strong> reason to use Doodle when you can use
<a href="https://framadate.org">Framadate</a>. In the same vein, <a href="https://wekan.io">Wekan</a>
is close enough to Trello that you should really consider it as well.
For Slack vs. <a href="https://about.mattermost.com">Mattermost</a> vs. IRC, the
trade-off is more subtle. As a side note, the cost of lock-in is greatly
reduced when the proprietary service is built on standard protocols. For
example, GMail is not that much of a problem because it is easy enough to
use IMAP to integrate it (and possibly move away from it in the future).
If Slack were just a stellar opinionated client using IRC protocols and
servers, it would also not be that much of a problem.</p>
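<p>To make that point concrete, here is a minimal Python sketch of the IMAP
case. The self-hosted hostname and account details are made-up placeholders;
the point is only that with a standard protocol, the provider shrinks to a
configuration value, so moving away later is a one-line change:</p>

```python
import imaplib

# Illustrative only: "imap.example.org" is a placeholder for a
# self-hosted server (e.g. Dovecot); only GMail's hostname is real.
PROVIDERS = {
    "gmail": "imap.gmail.com",
    "self-hosted": "imap.example.org",
}

def open_mailbox(provider, user, password):
    """Connect to any standards-compliant IMAP server over SSL.

    Because IMAP is a standard protocol, switching providers only
    means picking a different hostname; the rest of the integration
    code stays identical.
    """
    conn = imaplib.IMAP4_SSL(PROVIDERS[provider], 993)
    conn.login(user, password)
    conn.select("INBOX")
    return conn
```

<p>The same reasoning applies to any service built on open protocols: the
integration cost is paid once, and the exit cost stays low.</p>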
<h3>Part of the solution</h3>
<p>Any simple answer to this trade-off would be dogmatic. You are not impure
if you use proprietary services, and you are not wearing blinders if you
use open source software for your project infrastructure. Each community
will answer that trade-off differently, based on their roots and history.
The important part is to acknowledge that nothing is free. When the choice
is made, we all need to be mindful of what we gain, and what we lose.
To conclude, I think we can all agree that all other things being equal, when
there is an open-source solution which has all the features of the
proprietary offering, we all prefer to use that. The corollary is, we all
benefit when those open-source solutions get better. So to be part of the
solution, consider helping those open source projects build something as
good as the proprietary alternative, especially when they are pretty close
to it feature-wise. That will make solving that trade-off a lot easier.</p>
<h1>So you want to create a new official OpenStack project...</h1>
<p><em>2017-01-16, Thierry Carrez</em></p>
<p>OpenStack development is organized around a mission, a
<a href="https://governance.openstack.org/tc/reference/charter.html">governance model</a>
and a <a href="https://governance.openstack.org/tc/reference/principles.html">set of principles</a>.
Project teams apply for inclusion, and the
<em><a href="https://governance.openstack.org/tc/">Technical Committee</a></em> (TC),
elected by all OpenStack contributors, judges whether that team's work helps
with the OpenStack mission and follows the OpenStack development principles.
If it does, the team is considered part of the OpenStack development
community, and its work is considered an official OpenStack project.</p>
<p>The main effect of being official is that it places the team's work under the
oversight of the Technical Committee. In exchange, recent contributors to that
team are considered
<em><a href="https://governance.openstack.org/tc/reference/charter.html#voters-for-tc-seats-atc">Active Technical Contributors</a></em>
(ATCs), which means they can participate in the vote to elect the Technical
Committee.</p>
<h3>Why?</h3>
<p>When you want to create a new official OpenStack project, the first thing to
check is whether you're doing it for the right reasons. In particular, there
is no need to be an official OpenStack project to benefit from our outstanding
project infrastructure (git repositories, Gerrit code reviews, cloud-powered
testing and gating). There is also no need to place your project under the
OpenStack Technical Committee oversight to be allowed to work on something
related to OpenStack. And the ATC status no longer brings additional benefits,
beyond the TC election voting rights.</p>
<p>From a development infrastructure standpoint, OpenStack provides the
governance, the systems and the neutral asset lock to create open
collaboration grounds. On those grounds multiple organizations and
individuals can cooperate on a level playing field, without one
organization in particular owning a given project.</p>
<p>So if you are not interested in having new organizations contribute to your
project, or would prefer to retain full control over it, it probably makes
sense to <strong>not</strong> ask to become an official OpenStack project. The same goes if you
want to follow slightly different principles, or want to relax certain
community rules, or generally would like to behave a lot differently than other
OpenStack projects.</p>
<h3>What?</h3>
<p>Still with me? So... what would be a good project team to propose for
inclusion? The most important aspect is that the topic you're working on
must help further the OpenStack Mission, which is <em>to produce a ubiquitous
Open Source Cloud Computing platform that is easy to use, simple to implement,
interoperable between deployments, works well at all scales, and meets the
needs of users and operators of both public and private clouds</em>.</p>
<p>It is also very important that the team seamlessly merges into the OpenStack
Community. It must adhere to the
<a href="https://governance.openstack.org/tc/reference/opens.html">4 Opens</a>
and follow the OpenStack
<a href="https://governance.openstack.org/tc/reference/principles.html">principles</a>.
The Technical Committee made a number of choices to avoid fragmenting the
community into several distinct silos. All projects use Gerrit to propose
changes, IRC to communicate, a set of
<a href="https://governance.openstack.org/tc/resolutions/20150901-programming-languages.html">approved programming languages</a>...
Those rules are not set in stone, but we are unlikely to change them just
to facilitate the addition of one given new project team. All those
requirements are summarized in the
<a href="https://governance.openstack.org/tc/reference/new-projects-requirements.html">new project requirements</a>
document.</p>
<p>The new team must also know its way around our various systems, development
tools and processes. Ideally the team would be formed from existing OpenStack
community members; if not, the
<a href="http://docs.openstack.org/project-team-guide/">Project Team Guide</a>
is there to help you get up to speed.</p>
<h3>Where?</h3>
<p>OK, you're now ready to take the plunge. One question you may ask yourself
is whether you should contribute your project to an existing project team,
or ask to become a new official project team.</p>
<p>Since the recent
<a href="https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html">project structure reform</a>
(a.k.a. the "big tent"), work in OpenStack is organized around groups of
people, rather than the general topic of your work. So you don't have to ask
the Neutron team to adopt your project just because it is about networking.
The real question is more... <em>is it the same team working on both
projects?</em> Does the existing team feel like it can vouch for this new
work, and/or is it willing to adapt its team scope to include it? Having
two different groups under a single team and PTL only creates extra
governance problems. So if the teams working on it are distinct enough,
then the new project should probably be filed separately.</p>
<p>Another question you may ask yourself is whether alternate implementations
of the same functionality are OK. Is competition allowed between official
projects? On one hand, competition means dilution of effort, so you want
to minimize it. On the other, you don't want to close off evolutionary paths,
so you need to let alternate solutions grow. The Technical Committee's answer
to that is: alternate solutions are allowed, as long as they are not
<em>gratuitously competing</em>. Competition must be between two different
technical approaches, not two different organizations or egos. Cooperation
must be considered first. This is all the more important the deeper you go
in the stack: it is obviously a lot easier to justify competition on an
OpenStack installer (which consumes all other projects), than on AuthN/AuthZ
(which all other projects rely on).</p>
<h3>How?</h3>
<p>Let's do this! How to proceed? The first and hardest part is to pick a
name. We want to avoid having to rename the project later due to trademark
infringement, once it has built some name recognition. A good rule of thumb
is that if the name sounds good, it's probably already used somewhere.
Obscure made-up names, or word combinations are less likely to be a registered
trademark than dictionary words (or person names). Online searches can help
weed out the worst candidates. Please be a good citizen and also avoid
collision with other open source project names, even if they are not
trademarked.</p>
<p>Step 2, you need to create the project on OpenStack infrastructure. See the
<a href="http://docs.openstack.org/infra/manual/creators.html">Infra manual</a>
for instructions, and reach out on the #openstack-infra IRC channel if you
need help.</p>
<p>The final step is to propose a change to the
<a href="http://git.openstack.org/cgit/openstack/governance">openstack/governance</a>
repository, to add your project team to the
<a href="http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml">reference/projects.yaml</a>
file. That will serve as the official request to the Technical Committee,
so be sure to include a very informative commit message detailing how well
you meet the
<a href="https://governance.openstack.org/tc/reference/new-projects-requirements.html">new projects requirements</a>.
Good examples of that would be
<a href="https://review.openstack.org/#/c/402227/">this change</a>
or <a href="https://review.openstack.org/#/c/353693/">this one</a>.</p>
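<p>For illustration, here is a rough, hypothetical sketch of what such a
<code>reference/projects.yaml</code> entry looks like. The team name, PTL
details and repository below are invented, and the exact schema evolves over
time, so model your change on the existing entries in the file:</p>

```yaml
widgetizer:                        # hypothetical team name
  ptl:
    name: Jane Doe                 # made-up PTL details
    irc: jdoe
    email: jdoe@example.org
  mission: >
    To provide widget lifecycle management for OpenStack clouds.
  url: https://wiki.openstack.org/wiki/Widgetizer
  deliverables:
    widgetizer:
      repos:
        - openstack/widgetizer
```

<p>The commit message accompanying this addition is where you make your case
to the Technical Committee, so spend more time on it than on the YAML itself.</p>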
<h3>When?</h3>
<p>The timing of the request is important. In order to be able to assess
whether the new team behaves like the rest of the OpenStack community,
the Technical Committee usually requires that the new team operates on
OpenStack infrastructure (and collaborates on IRC and the mailing-list)
for a few months.</p>
<p>We also tend to freeze new team applications during the second part of
the development cycles, as we start preparing for the release and the PTG.
So the optimal timing would be to set up your project on OpenStack
infrastructure around the middle of one cycle, and propose for official
inclusion at the start of the next cycle (before the first development
milestone). Release schedules are published
<a href="https://releases.openstack.org/">here</a>.</p>
<h3>That's it!</h3>
<p>I hope this article will help you avoid the most obvious traps
on your way to becoming an official OpenStack project. Feel free to reach out
to me (or any other
<a href="https://governance.openstack.org/tc/">Technical Committee member</a>)
if you have questions or would like extra advice!</p>
<h1>OpenStack TC mythbusting</h1>
<p><em>2016-11-28, Thierry Carrez</em></p>
<p>On several occasions over the last months, I heard people exposing <em>truths</em>
about the OpenStack Technical Committee. However, those positions were often
inaccurate or incomplete. Arguably we are not communicating enough: governance
changes and resolutions that are brought to the TC are just approved or
rejected. That binary answer is generally not the whole story, and with only
the headline, it is easy to read too much into a past decision. We should do a
better job at communicating beyond that simple answer when the topic is more
complex, and at continuously explaining the role and limits of the TC.
Hopefully this blogpost will help, by busting some of those myths.</p>
<h3>Myth #1: "the TC doesn't want Go in OpenStack"</h3>
<p>This one comes from the recent rejection of a resolution proposing to add
golang to the list of approved languages, to support merging the <em>hummingbird</em>
feature branch into Swift's <em>master</em> branch. A more accurate way to present
that decision would be to say that a narrow majority of the TC members was not
supporting the addition of golang at that time and under the proposed
conditions. In summary, it was more of a "not now, not this way" than a "never".</p>
<p>There were a number of reasons for this decision, but the crux was: adding a
language creates fragmentation and a support cost for all of the OpenStack
community, so we need to make sure "OpenStack" as a whole is successful with
it, beyond any specific project. That means for example having a clear plan
for integrating Go within all our standing processes, while trying to prevent
duplication of effort or gratuitous rewrites. The discussion was actually
recently restarted by Flavio in
<a href="https://review.openstack.org/#/c/398875/">this change</a>. We'll need
resources to make this a success, so if you care, please jump in to help.</p>
<h3>Myth #2: "the TC doesn't like competition with existing projects"</h3>
<p>I'm not exactly sure where this one comes from. I can't think of any specific
TC decision that would explain it. Historically, back when we had <em>programs</em>,
a given team would own a given problem space and could essentially lock
alternatives out. But the <em>Big Tent</em> project structure reform changed that,
allowing competition to happen within our community.</p>
<p>Yes, we still want to encourage cooperation wherever it's possible, so we
still have a requirement which says "<em>where it makes sense, the project
cooperates with existing projects rather than gratuitously competing or
reinventing the wheel</em>". But as long as the new project is different or
presents a different value proposition, it should be fair game. For example we
accepted the Monasca team in the same problem space as the Telemetry team.
And we have several teams working on various deployment recipes.</p>
<p>That said, having competitive solutions in OpenStack on key problem spaces
(like base compute or networking services) creates interesting second-order
questions in terms of what is "core", trademark usage and interoperability.
Those are arguably more downstream concerns than upstream concerns, but that
explains why the deeper you go, the more difficult the community discussion
is likely to be.</p>
<h3>Myth #3: "the TC does not set any direction"</h3>
<p>There are multiple variants on that one, from "OpenStack needs a benevolent
dictator" to "a camel is a horse designed by a committee". The idea behind it
is that the TC needs to have a very opinionated plan for OpenStack and somehow
force everyone in our community to follow it. Part of this myth is trying to
apply single-vendor software development theory to an open collaboration, and
misunderstanding how other large open source projects (like the Linux kernel)
work.</p>
<p>While the TC members are all well-respected in our community, we can't
unilaterally decide everything for 2500+ developers from 300+ different
organizations, and expect them all to execute The Plan. What the TC can do,
however, is to define the mission, provide an environment, set principles,
enforce common practices, and arbitrate conflicts. In painting terms, the TC
provides the frame, the subject of the painting, a color palette and the
techniques that can be used. But it doesn't paint. Painting is done at each
project team level.</p>
<p>In this cycle, the TC started to drive some simple cross-community goals. The
idea is to collectively make visible progress on a given topic over the course
of a release cycle, to pay back technical debt or to implement a simple feature
across all of OpenStack projects. But this is done as a goal the community
agrees to work on, rather than a top-down mandate.</p>
<h3>Myth #4: "the TC, due to the Big Tent, prevents proper focus"</h3>
<p>This one is interesting, and I think its roots lie in some misunderstanding of
open source community dynamics. If you consider a finite set of resources and
a zero-sum-game community, then of course adding more projects results in less
resources being dedicated to "important projects". But an open community like
OpenStack is not a finite set of resources. The people and the organizations
in the OpenStack community work and cooperate on a number of projects. Before
the big tent, some of those would not be considered part of the OpenStack
projects, despite helping with the OpenStack mission and following the
OpenStack development principles, and therefore essentially being developed
by the OpenStack community.</p>
<p>Considering more (or less) projects as being part of our community doesn't
decrease (or increase) focus on "important projects". It's up to each
organization in OpenStack to focus on the set of projects it considers
important. For more on that, go read Ed Leafe's
<a href="https://blog.leafe.com/openstack-focus/">brilliant blogpost</a>, he expressed
it better than I can. Of course there are some efforts (like packaging)
where adding more projects results in diluting focus. But with every added
project comes new blood in our community (rather than artificially keeping it
out), and some of that new blood ends up helping on those efforts. It's not
a zero-sum game, and the big tent makes sure we are open to new ways of
achieving the OpenStack mission and have an inclusive and welcoming community.</p>
<h1>What is the role of the OpenStack Technical Committee</h1>
<p><em>2016-10-03, Thierry Carrez</em></p>
<p>This week we are <a href="http://governance.openstack.org/election/">renewing 6 seats</a>
in the 13-member OpenStack Technical Committee. This election has attracted
a large number of candidates, which is a great sign that people care about
the Technical Committee. At the same time, there are a lot of misconceptions
in our community about what the TC is for, what it can or cannot do, and the
overlap with other groups. I'll try to clarify that in this post.</p>
<h3>A misleading name</h3>
<p>Part of the reason why there are so many misconceptions about the role of the
TC is that its name is pretty misleading. The Technical Committee is not
primarily technical: most of the issues that the TC tackles are open source
project governance issues. The TC is not really a committee either: it is a
group of elected people who will vote on resolutions and changes that are
proposed and which affect OpenStack as a whole. In the US, it is closer to
the Supreme Court than to anything else.</p>
<p>The primary role of the TC is to act as the final stage of decision-making
in the OpenStack open source project. It is extremely
important in an open source project to have a "buck stops here" body that is
empowered to make decisions for the project if consensus and agreement cannot
be reached elsewhere -- otherwise it risks complete stand-still. I have been
involved in projects which were stuck in such governance grey area, and it
is a pretty ugly situation. It's interesting to note that the mere existence
of the final decision-making body is enough to achieve consensus at the lower
levels: two teams will prefer to find agreement between themselves rather
than call for final arbitration by the TC. This is a feature, not a bug.</p>
<h3>An evolving mission</h3>
<p>The mission of the TC is to lead the development of "OpenStack". As OpenStack
evolved into one framework with a lot of collaborating components, each
developed by project teams with their own governance, the mission of
the TC also evolved. These days we focus on the "OpenStack" experience. What
does it mean to be an OpenStack project? What does it imply in terms of
development practices, general principles, common goals, cooperation, minimal
QA or user experience? Is this new group applying to become an official
OpenStack project team following enough of those rules to be considered "one
of us"?</p>
<p>Some people would like the TC to single-handedly solve upgrades, scalability,
interoperability or end user experience. Some other people would like the
TC to let the individual project teams do as they want. Reality is, neither
of those extremes is likely to happen. The TC is just 13 (usually busy)
people, they can't solve all the issues in OpenStack by themselves. They are
elected because we trust them to make the right decisions for OpenStack, not
because they are the ultimate 100x engineers who can fix everything. On the
other hand, the TC is not powerless: by continuously refining what it means to
be an "OpenStack project", it can influence OpenStack project teams to address
the right topics. We did that through assert tags, which encouraged teams to
improve their upgrade story or adopt a sane feature deprecation policy. We'll
do that through release cycle goals, to drive visible improvements across
all the components in OpenStack.</p>
<h3>Help, please</h3>
<p>This is also why the TC needs all the help it can get. I'm excited we now have
the <a href="https://wiki.openstack.org/wiki/Meetings/Arch-WG">Architecture workgroup</a>,
an open group of people interested in addressing long-standing architecture
issues in OpenStack as a whole. I'm thrilled that we have a
<a href="https://wiki.openstack.org/wiki/Meetings/SWGMeeting">Stewardship workgroup</a>,
an open group of people encouraging leaders in OpenStack to adopt Servant
Leadership practices and tools. We are electing 13 members to vote and make
the final calls, but the ideas don't all have to come from those 13 members.
If you're a candidate and you're not elected, it doesn't prevent you from
working on governance problems and proposing changes.</p>
<p>So... if you are an eligible voter, please take the time to read the candidates'
platform emails and vote. Whoever gets elected, they will need the legitimacy
that only a good turnout in elections can give them. And if you are a candidate
and don't get elected, please consider joining those open workgroups, and
propose governance changes -- keeping "OpenStack" together is definitely not
a 13-person task.</p>
<h1>How splitting the Design Summit enhances the development process</h1>
<p><em>2016-04-20, Thierry Carrez</em></p>
<p>I was recently privately asked how I expected the split of the current
OpenStack Design Summit into two different events to enhance the overall
OpenStack development process. Here is the answer I gave...</p>
<p>We are working on splitting the "Design Summit" into a more open
requirements-gathering and feedback forum at the Summit on one side,
and a separated event for project team members on the other side. I expect
this change will greatly enhance the development process, for the following
reasons:</p>
<ol>
<li>
<p>Key upstream developers and PTLs were too busy during the Summit week
to actually watch and listen. Having them more available during the Summit
week (where all of our community is present) should help a lot in getting
the right priorities across. The feedback sessions were also not very well
balanced (being organized as part of the upstream developer-centric event
called the "Design Summit"). Making it more like a regular Summit event will
help make it a neutral exchange forum between community peers (rather than
dev kings listening to grievances in their throne hall).</p>
</li>
<li>
<p>Project teams were lacking time to work together as a team: building
trust, organizing the work, agreeing on priorities and assigning tasks. The
current "Design Summit" doesn't work so well for that because it doubles as
a general forum and a lot of people outside the team members were attending
the sessions. There were also too many distractions to hang out between team
members and build social bonds. This is why a lot of teams organized specific,
separated events (the "midcycles"): to get more time together. The new
"Project Teams Gathering" event is all about providing that time to work
together as a team.</p>
</li>
<li>
<p>Partly as a result of separate midcycle events, project teams operated in
silos and were unlikely to be exposed to, or take on, critical cross-project work.
They would also skip the cross-project workshops at the Design Summit since so
much is happening at the same time. The new event should provide specific time
for cross-project work, without anything running against it. It should
encourage team members from a vertical team (Nova, Neutron...) to join and
participate in horizontal / cross-project efforts (QA, Infra, technology
convergence, release theme...), to break out of their silo and become true
"OpenStack" contributors.</p>
</li>
<li>
<p>The timing of the "Design Summit" event was inefficient. It was too late
to organize work, too early to get feedback on the recent release. By splitting
the events, we put the main Summit further away from the release -- giving time
for packagers, deployers, and solution builders to build a product and start
experimenting with the new release. The quality of the feedback we get should
therefore improve a lot. It also lets us put the new event closer to the start
of the development cycle (closer to the previous cycle feature freeze),
ensuring there is less "down" time between cycles (almost two months in the
current setup).</p>
</li>
</ol>
<p>So, in summary, I expect the new split event format to solve long-standing
issues that made the "Design Summit" no longer efficient. It should increase
our productivity, but also greatly improve the feedback loops between
downstream and upstream.</p>Splitting out the OpenStack Design Summit2016-02-22T16:22:00+01:002016-02-22T16:22:00+01:00Thierry Carreztag:ttx.re,2016-02-22:/splitting-out-design-summit.html<p>In a global and virtual community, high-bandwidth face-to-face time is
essential. This is why we made the OpenStack Design Summits an integral
part of our processes from day 0. Those were set at the beginning of each
of our development cycles to help set goals and organize the work for …</p><p>In a global and virtual community, high-bandwidth face-to-face time is
essential. This is why we made the OpenStack Design Summits an integral
part of our processes from day 0. Those were set at the beginning of each
of our development cycles to help set goals and organize the work for the
upcoming 6 months. At the same time and in the same location, a more
traditional conference was happening, ensuring a lot of interaction between
the upstream (producers) and downstream (consumers) parts of our community.</p>
<p>This setup, however, has a number of issues. For developers first: the
"conference" part of the common event got bigger and bigger and it is
difficult to focus on upstream work (and socially bond with your teammates)
with so many other commitments and distractions. The result is that our
design summits are a lot less productive than they used to be, and we
organize other events ("midcycles") to meet our focus and small-group
socialization needs. The timing of the event (a couple of weeks after the
previous cycle release) is also suboptimal: it is way too late to gather any
sort of requirements and priorities for the already-started new cycle, and
also too late to do any sort of work planning (the cycle work started almost
2 months ago).</p>
<p>But it's not just suboptimal for developers. For contributing companies,
flying all their developers to expensive cities and conference hotels so
that they can attend the Design Summit is pretty costly, and the goals of
the summit location (reaching out to users everywhere) do not necessarily
align with the goals of the Design Summit location (minimize and balance
travel costs for existing contributors). For the companies that build products
and distributions on top of the recent release, the timing of the common event
is not so great either: it is difficult to show off products based on the
recent release only two weeks after it's out. The summit date is also too
early to leverage all the users attending the summit to gather feedback on
the recent release -- not a lot of people would have tried upgrades by
summit time. Finally, a common event is also suboptimal for event
organization: finding venues that can accommodate both events is becoming
increasingly complicated.</p>
<p>The time is ripe for a change. After Tokyo, we at the Foundation have been
considering options on how to evolve our events to solve those issues. This
proposal is the result of this work. There is no perfect solution here (and
this is still work in progress), but we are confident that this strawman
solution solves a lot more problems than it creates, and balances the needs
of the various constituents of our community.</p>
<p>The idea would be to split the events. The first event would be for upstream
technical contributors to OpenStack. It would be held in a simpler, scaled-back
setting that would let all OpenStack project teams meet in separate rooms,
but in a co-located event that would make it easy to have ad-hoc cross-project
discussions. It would happen closer to the centers of mass of contributors,
in less-expensive locations.</p>
<p>More importantly, it would be set to happen a couple of weeks <strong>before</strong> the
previous cycle release. There is a lot of overlap between cycles. Work on a
cycle starts at the previous cycle feature freeze, while there are still 5
weeks to go. Most people switch full-time to the next cycle by RC1.
Organizing the event just after that time lets us organize the work and
kickstart the new cycle at the best moment. It also allows us to use our time
together to quickly address last-minute release-critical issues if such
issues arise.</p>
<p>The second event would be the main downstream business conference, with
high-end keynotes, marketplace and breakout sessions. It would be organized
two or three months <strong>after</strong> the release, to give time for all downstream
users to deploy and build products on top of the release. It would be the best
time to gather feedback on the recent release, and also the best time to have
strategic discussions: start gathering requirements for the next cycle,
leveraging the very large cross-section of all our community that attends
the event.</p>
<p>To that effect, we'd still hold a number of strategic planning sessions at
the main event to gather feedback, determine requirements and define overall
cross-project themes, but the session format would not require all project
contributors to attend. A subset of contributors who would like to participate
in these sessions can collect and relay feedback to other team members for
implementation (similar to the Ops midcycle). Other contributors will also
want to get more involved in the conference, whether that's giving
presentations or hearing user stories.</p>
<p>The split should ideally reduce the need to organize separate in-person
mid-cycle events. If some are still needed, the main conference venue and
time could easily be used to provide space for such midcycle events (given
that it would end up happening in the middle of the cycle).</p>
<p>In practice, the split means that we need to stagger the events and cycles.
We have a long time between Barcelona and the Q1 Summit in the US, so the
idea would be to use that long period to insert a smaller cycle (Ocata) with
a release in early March 2017, and have the first dedicated contributors event
at the start of the P cycle in mid-February 2017. With the already-planned
events in 2016 and 2017 it is the earliest we can make the transition. We'd
have a last, scaled-down design summit in Barcelona to plan the shorter cycle.</p>
<p><img alt="cycle_overlap" src="https://ttx.re/images/overlap2.png"></p>
<p>With that setup, we hope that we can restore the productivity and focus of
the face-to-face contributors gathering, reduce the need to have midcycle
events for social bonding and team building, keep the cost of getting all
contributors together once per cycle under control, maintain the feedback
loops with all the constituents of the OpenStack community at the main event,
and better align the timing of each event with the reality of the release
cycles.</p>
<p>NB: You will note that I did not name the separated event "Design Summit" --
that is because <em>Design</em> will now be split into feedback/requirements
gathering (the <strong>why</strong> at the main event) and execution planning and
kickstarting (the <strong>how</strong> at the contributors-oriented event), so that name
doesn't feel right anymore. We can bikeshed on the name for the new event
later :)</p>
<p>This was also posted to the <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-February/087161.html">openstack-dev ML</a>:
please comment and follow-up there if you have thoughts to share.</p>OpenStack Common Culture2015-08-27T14:12:00+02:002015-08-27T14:12:00+02:00Thierry Carreztag:ttx.re,2015-08-27:/openstack-common-culture.html<p>We are 5 years into the OpenStack ride (and 3 years into the OpenStack
Foundation ride), and the challenges for our community are evolving.
In this article I want to talk about what I consider the most significant
threat for our open source community today: the loss of our common …</p><p>We are 5 years into the OpenStack ride (and 3 years into the OpenStack
Foundation ride), and the challenges for our community are evolving.
In this article I want to talk about what I consider the most significant
threat for our open source community today: the loss of our common culture.</p>
<p>Over the past year we evolved the OpenStack project model to adopt an
<a href="http://ttx.re/the-way-forward.html">inclusive approach</a>. Project teams which
work on deliverables that help us achieve the OpenStack Mission, and which
follow our development and community practices, should generally be accepted
under the "big tent". As we explained in
<a href="https://www.youtube.com/watch?v=TTe_bZtEKxo">this presentation in Vancouver</a>,
we moved from asking "is this OpenStack?" to asking "are you OpenStack?".</p>
<p>What does it mean to <em>be OpenStack</em>? We wrote down a
<a href="http://governance.openstack.org/reference/new-projects-requirements.html">set of principles</a>,
based on the original <a href="https://wiki.openstack.org/wiki/Open"><em>four opens</em></a>
that were defined at the very beginning of this journey. But
"being OpenStack" goes beyond that. It is to be aligned on a common goal,
be part of the same effort, be the same tribe. OpenStack relies on a number
of individuals working cross-project (on infrastructure, QA, documentation,
release processes, interoperability, user experience, API guidelines,
vulnerability management, election organization...). It is because we belong
to the same tribe that some people and organizations care enough about
"OpenStack" as a whole to dedicate time to those essential cross-project
efforts.</p>
<p>This is why we standardize on logged IRC channels as a communication medium,
why we ask that every project change goes through Gerrit, and why we should
very conservatively add new programming languages to the mix. Some people
advocate letting OpenStack project teams pick whatever language they want,
or letting them meet on that new trendy videoconferencing app, or letting
them track bugs on separate JIRA instances. More freedom sounds good at first
glance, but doing so would further fragment our community into specific silos
that all behave differently. Doing so would make recruiting for those
essential cross-project efforts even harder than it is today, while at the
same time making the work of those cross-project efforts significantly more
complex. Doing so would make our community crumble under its own weight.</p>
<p>We started this journey with a pretty strong common culture. It was mostly
oral tradition. We assumed that as OpenStack grew, our culture would
naturally be assimilated by new members. And it did, for quite some time.
But today we are at a point where we dramatically expanded our community
(we doubled the number of project teams over the last year) and our common
culture did not naturally transmit to newcomers. Silos with local traditions
have formed. Teams don't all behave in the same way anymore. Most team
members only care about a single project team. We struggle to move from one
project to another. We struggle to provide common solutions that work for
everyone. We struggle to recruit for cross-project efforts more than we ever
did. OpenStack's future as a community is at risk. It's time to hold the
culture line, rather than time to further relax it.</p>
<p>It is also more than time that we document our common culture, so that it
can be explicitly communicated to everyone in the OpenStack ecosystem
(current and prospective members). We started a workgroup at the
<a href="http://governance.openstack.org/">Technical Committee</a>, held a
<a href="https://wiki.openstack.org/wiki/VirtualSprints">virtual sprint</a> to get a
base version written, and now here it is: the first version of the
<a href="http://docs.openstack.org/project-team-guide">OpenStack Project team guide</a>.
Read it, refer to it, communicate it to your OpenStack community fellows,
propose changes to it. It is an essential tool for us to overcome this new
challenge. It's certainly not the only tool, and I hope we'll be able to
dedicate a cross-project session at the Mitaka Design Summit in Tokyo to
further discuss this topic.</p>The Age of Foundations2015-07-29T15:30:00+02:002015-07-29T15:30:00+02:00Thierry Carreztag:ttx.re,2015-07-29:/the-age-of-foundations.html<p>At OSCON last week, Google announced the creation around Kubernetes of the
Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his
keynote to the (recently-renamed) Open Container Initiative, confirming the
Linux Foundation's recent shift towards providing Foundations-as-a-Service.
Foundations ended up being the talk of the show, with some questioning …</p><p>At OSCON last week, Google announced the creation around Kubernetes of the
Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his
keynote to the (recently-renamed) Open Container Initiative, confirming the
Linux Foundation's recent shift towards providing Foundations-as-a-Service.
Foundations ended up being the talk of the show, with some questioning the
need for Foundations for everything, and others discussing the rise of
Foundations as tactical weapons.</p>
<h2>Back to the basics</h2>
<p>The main goal of open source foundations is to provide a neutral, level and
open collaboration ground around one or several open source projects. That is
what we call the <strong>upstream</strong> support goal. Projects are initially created by
individuals or companies that own the original trademark and have power to
change the governance model. That creates a tilted playing field: not all
players are equal, and some of them can even change the rules in the middle
of the game. As projects become more popular, that initial parentage becomes
a blocker for other contributors or companies to participate. If your goal is
to maximize adoption, contribution and mindshare, transferring the ownership
of the project and its governance to a more neutral body is the natural next
step. It removes barriers to contribution and truly enables open innovation.</p>
<p>Now, those foundations need basic funding, and a common way to achieve that
is to accept corporate members. That leads to the secondary goal of open source
foundations: serve as a marketing and business development engine for companies
around a common goal. That is what we call the <strong>downstream</strong> support goal.
Foundations work to build and promote a sane ecosystem around the open source
project, by organizing local and global events or supporting initiatives to
make it more usable: interoperability, training, certification, trademark
licenses...</p>
<h2>Not all Foundations are the same</h2>
<p>At this point it's important to see that a foundation is not a label; the
name doesn't come with any guarantee. All those foundations are actually very
different, and you need to read the fine print to understand their goals or
assess exactly how open they are.</p>
<p>On the upstream side, few of them actually let their open source project be
completely run by their individual contributors, with elected leadership
(one contributor = one vote, and anyone may contribute). That form of
governance is the only one that ensures that a project is really open to
individual contributors, and the only one that prevents forks due to
contributors and project owners not having aligned goals. If you restrict
leadership positions to appointed seats by corporate backers, you've created
a closed pay-to-play collaboration, not an open collaboration ground. On the
downstream side, not all of them accept individual members or give
representation to smaller companies, beyond their founding members.
Those details matter.</p>
<p>When we set up the OpenStack Foundation, we worked hard to make sure we
created a solid, independent, open and meritocratic <em>upstream</em> side. That,
in turn, enabled a pretty successful <em>downstream</em> side, set up to be inclusive
of the diversity in our ecosystem.</p>
<h2>The future</h2>
<p>I see the "Foundation" approach to open source as the only viable solution
past a given size and momentum around a project. It's certainly preferable
to "open but actually owned by one specific party" (which sooner or later
leads to forking). Open source now being the default development model in
the industry, we'll certainly see even more foundations in the future, not
less.</p>
<p>As this approach gets more prevalent, I expect a rise in more tactical
foundations that primarily exist as a trade association to push a specific
vision for the industry. At OSCON during those two presentations around
container-driven foundations, it was actually interesting to notice not the
common points, but the differences. The message was subtly different (pods
vs. containers), and the companies backing them were subtly different too.
I expect differential analysis of Foundations to become a thing.</p>
<p>My hope is that as the "Foundation" model of open source gets ubiquitous,
we make sure that we distinguish those which are primarily built to sustain
the needs or the strategy of a dozen large corporations, and those which
are primarily built to enable open collaboration around an open source project.
The <em>downstream</em> goal should stay a secondary goal, and new foundations need
to make sure they first get the <em>upstream</em> side right.</p>
<p>In conclusion, we should certainly welcome more Foundations being created to
sustain more successful open source projects in the future. But we also need
to pause and read the fine print: assess how open they are, discover who ends
up owning their upstream open source project, and determine their primary
reason for existing.</p>New OpenStack component versioning2015-06-26T14:45:00+02:002015-06-26T14:45:00+02:00Thierry Carreztag:ttx.re,2015-06-26:/new-versioning.html<p>Yesterday we reached the liberty-1 development milestone. You may have noticed
from the <a href="http://lists.openstack.org/pipermail/openstack-announce/2015-June/000391.html">announcement</a>
that the various components released were all using new, different version
numbers. What's going on here?</p>
<h2>Once upon a time</h2>
<p>Since the beginning of OpenStack we've been using two versioning schemes.
One was for projects released …</p><p>Yesterday we reached the liberty-1 development milestone. You may have noticed
from the <a href="http://lists.openstack.org/pipermail/openstack-announce/2015-June/000391.html">announcement</a>
that the various components released were all using new, different version
numbers. What's going on here?</p>
<h2>Once upon a time</h2>
<p>Since the beginning of OpenStack we've been using two versioning schemes.
One was for projects released once every 6 months and following a schedule
of development milestones and release candidates. Those would be using
a YEAR.N version number (like 2015.1 for Kilo).</p>
<p>Another was used by Swift, which was already mature when OpenStack started,
and which released intermediary versions as-needed throughout the cycle. It
would use a X.Y.Z version number which looked a lot more like semantic
versioning.</p>
<p>At the end of the cycle, we would coordinate a final release that would
combine both. For example the "Kilo" release would be made of Nova 2015.1.0,
Swift 2.3.0, and everything else at 2015.1.0.</p>
<h2>Recent developments</h2>
<p>A few things happened over the last two cycles. First, we released more and
more libraries, and those would follow a strict X.Y.Z semantic versioning.
Those would also have a final release in the cycle, from which a stable
branch would be maintained for critical bugfixes and vulnerability fixes. So
the portion of commonly-versioned YEAR.N deliverables was shrinking fast.</p>
<p>Second, some projects got more mature and/or more able to release
fully-functional intermediary releases as-needed. As a community, we still
can't support more than one stable branch every 6 months, so those intermediary
releases won't get backports, but past a given maturity step, it's still a
great thing to push new features to bleeding-edge users as early and often as
we can. For those a YEAR.N synchronized versioning scheme would not work.</p>
<h2>The versioning conundrum</h2>
<p>At that stage we had three options to handle those projects switching from
one model to another. They could keep their 2015.2.0 version and start doing
semantic versioning from that -- but that would be highly confusing, when you
end up releasing 2017.9.4 sometime in 2016. The second option was to reset
the version for projects as they switch. So Ironic would adopt, say, 3.0.0
while all other projects still use 2015.2.0.</p>
<p>The third option was to bite the bullet and drop the YEAR.N versioning at the
same time, for all the projects that were still using it. Switching
them all to some arbitrary number (say, 12.0.0 since that would be the 12th
OpenStack release) would create confusion as projects switching to
intermediary releases would slowly drift from the pack (most projects
publishing 13.0.0 while some would be at 12.5.2 and others at 13.1.0).
So to avoid that confusion, projects would pick purposefully distinct version
numbers based on their age.</p>
<h2>The change</h2>
<p>After discussions at the Vancouver Design Summit and on the mailing-list,
we opted for the third option, with an initial number calculated from the
number of past integrated releases already published.</p>
<p>It's a clean cut which will reduce ongoing disruption. All components end
up with a different, meaningful version number: there are no longer "normal"
and "outlier" projects. Additionally, it removes the weird impression we had
when we released 2014.2.2 stable versions sometime in 2015.</p>
<p>As far as impact is concerned, distributions will need to make sure to insert
an <em>epoch</em> so that package versions sort correctly in their package management
systems. If your internal CI pipeline relies on sorting version numbers, it
will likely need an adjustment too. For everyone else, it should not have an
impact: when Liberty is out, you will upgrade to the liberty version of the
components, as you always did.</p>
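<p>The epoch point can be illustrated with a toy sketch. This assumes a
simplified, Debian-style "epoch:version" comparison where an absent epoch
defaults to 0; real package managers implement a much richer algorithm:</p>

```python
# Toy model of Debian-style "epoch:upstream" version ordering, showing why
# distributions need an epoch after the YEAR.N -> X.Y.Z switch. This is a
# simplified illustration, not dpkg's or rpm's actual comparison logic.

def parse(version):
    """Split 'epoch:upstream' into (epoch, numeric components)."""
    epoch, _, upstream = version.rpartition(":")
    return (int(epoch or 0), tuple(int(part) for part in upstream.split(".")))

# Without an epoch, the old Kilo version (2015.1.0) compares higher than the
# new Liberty version (12.0.0), so the upgrade would look like a downgrade:
assert parse("2015.1.0") > parse("12.0.0")

# Bumping the epoch (old versions carry an implicit epoch of 0) makes every
# new-scheme version sort after every YEAR.N version:
assert parse("1:12.0.0") > parse("2015.1.0")
```

<p>This is why a plain version string swap is not enough for packaging systems:
the epoch is the standard escape hatch when an upstream versioning scheme
changes in a non-monotonic way.</p>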
<h2>Liberty-1 and the future</h2>
<p>The change in versions was pushed last week, and that is why for liberty-1 we
published 12.0.0.0b1 for Nova, 8.0.0.0b1 for Keystone, and 1.0.0.0b1 for
Designate, etc. Those are still on a milestone-based 6-month release cycle,
but their "Liberty" final versions won't all be "2015.2.0": rather 12.0.0
for Nova, 8.0.0 for Keystone, etc.</p>
<p>To reduce the confusion, the release management team will provide tooling and
web pages to describe what each series means in terms of component version
numbers (and the other way around).</p>
<p>We hope this future-proof change will bring some more freedom for OpenStack
project teams to pick the release model that is the most interesting for them
and their user base. For a cycle named "liberty", that sounded like a pretty
good time to do it.</p>OpenStack Technical Committee candidates2015-04-16T12:15:00+02:002015-04-16T12:15:00+02:00Thierry Carreztag:ttx.re,2015-04-16:/tech-committee-candidates.html<p>The election process to renew half of the OpenStack Technical Committee will
start tomorrow with candidates self-nominating to run for election. As the
chair of the existing Technical Committee (and running for reelection) I would
like to share some thoughts on what would make in my opinion good TC members …</p><p>The election process to renew half of the OpenStack Technical Committee will
start tomorrow with candidates self-nominating to run for election. As the
chair of the existing Technical Committee (and running for reelection) I would
like to share some thoughts on what would make in my opinion good TC members
for the upcoming cycle.</p>
<p>A few words on the OpenStack Technical Committee role first. The role of the
TC is to lead the "OpenStack" software development in general. Each individual
project is led by its project team and its PTL, but the TC leads "OpenStack"
development. That includes defining the limits of what is considered "an
OpenStack project": during the Kilo cycle we introduced new rules to handle
that question, mostly based on alignment with the OpenStack Mission and
determining if the new project team shares the common values OpenStack has been
built on, behaves like an OpenStack project and therefore should be considered
part of the OpenStack community. The role of the TC also includes providing
guidance and advice to OpenStack projects, as well as driving horizontal
efforts and solving cross-project issues.</p>
<p>The Technical Committee also serves as an ultimate appeals board to solve
conflicts in our open source community. Its members are elected by all the
OpenStack contributors and are trusted to make the right call should an issue
escalate to them.</p>
<p>With all this said, what makes a good Technical Committee member? I'd say
the first attribute of a good candidate is how skilled they are and how
respected their opinion is on the above topics. Do you trust them to make the
right call as to what should be considered part of the OpenStack community?
Do you trust them to place "OpenStack" interest above specific projects' (or
specific companies') interest? Do you trust their open source community
experience to make the right judgment call in case issues are escalated?</p>
<p>The other attribute is, I think, equally important. It's how much time you can
specifically dedicate to the task of being a TC member. Being a good TC member
takes a lot of time. It's not an honorific position where you happen to sit
in a meeting one hour per week and decide the fate of OpenStack. It's about
diving into proposals, looking into new projects, driving cross-project
initiatives, identifying long-standing issues in our community, and raising
them. I spend about
40% of my work time working on TC stuff, and I wish I could spend more.</p>
<p>Let's be clear: I don't think that all TC members in the past had enough time
to dedicate to Technical Committee work. They were all community leaders,
skilled and trustworthy. But a lot of them had other fulltime responsibilities
too, like being the PTL of a large OpenStack project, or being a manager in
a major company or startup. In the end that meant the past TC members were
not as active on cross-project matters as they could have been.</p>
<p>In Liberty I'd like the Technical Committee to identify and propose plans
to address long-standing issues in OpenStack in general. I'd like individual
members to dive into specific projects and provide an audit of their current
health. I'd like us to make things better, rather than limit ourselves to
codifying governance. I'd like us to
<a href="http://ttx.re/stepping-out-of-the-way.html">step out of the way</a>,
and start being more useful. That takes a lot of time.</p>
<p>So as self-nomination period opens, I would like to encourage new people to
run for election. I'd like candidates to explain why they are fit for the job,
and why they think they will have a lot of time for it. And I'd like voters to
take that availability into account when they decide who to vote for.</p>Stepping out of the way2015-03-11T17:45:00+01:002015-03-11T17:45:00+01:00Thierry Carreztag:ttx.re,2015-03-11:/stepping-out-of-the-way.html<p>In the early days of OpenStack, we instituted a do-acracy: power to people
that do things over people that don't. Code talks: when unsure on the direction
to go, the one that came with code basically won. Don't ask for permission,
ask for forgiveness. Make progress, fix issues if they …</p><p>In the early days of OpenStack, we instituted a do-acracy: power to people
that do things over people that don't. Code talks: when unsure about the direction
to go, the one that came with code basically won. Don't ask for permission,
ask for forgiveness. Make progress, fix issues if they arise. That let us
positively move forward fast in the early days.</p>
<p>After some time, our community realized this didn't necessarily scale, and
this didn't necessarily result in quality. As we kept on adding contributors
way beyond our initial trusted small circle, we couldn't rely solely on our
common culture and shared understandings anymore. We started introducing
safeguards, the most visible and successful one being our code gating system
(which relies on human code reviews and automated tests to prevent unacceptable
code from merging). We started writing down our rules and governance, since
our ever-increasing group couldn't rely on oral tradition and shared experience
anymore. As picky newcomers complained about holes in governance rules in every
gray area, we started defining the process to define process.</p>
<p>I think we struck the right balance in those days between allowing our
community to grow and letting people do things. But the movement to add rules
unfortunately didn't stop then, it continues today. We added specs, which if
used with moderation (like only for significant features) are a great way to
avoid wasted development effort, but used blindly are a great way to increase
the length of our feature development pipeline and increase frustration. We
added automated tests to verify the presence (or is it absence?) of a
trailing dot in commit message titles. We started to get in the way of getting
things done.</p>
<p>At the Technical Committee level, that translates into hours lost
rubber-stamping stuff, hours we don't spend asking ourselves the right
questions (like: what is broken in OpenStack today, and how do we fix it?). Do we
really need to approve a project team's decision to add a new git repository?
Or should we just let them do things, and consider reverting that action if it
ends up causing a problem? It is an interesting balance to strike between
letting people go wild and approving their every move. On one hand, you don't
want them to waste energy on something you might end up striking down; on the
other hand, you don't want to block them when they first set out to create
stuff.</p>
<p>As of today, I think we have pushed the regulation and "ask for permission"
slider so far that we actually prevent things from happening. I'd like us to step out of
the way. When a proposal is not perfect, I'd like us to propose a subsequent
patchset instead of blocking the review forever. At the Technical Committee
level, I'd like us to let people do more things, and retreat to being an
appeals board in case problems end up arising. The Technical Committee has
always been the ultimate appeals board for problems that can't find a
resolution at lower levels of our community. In practice, the mere existence of that
appeals board encouraged conflict resolution at lower levels, to the point
where I can't remember us being called to resolve an actual dispute. I'd like
our community to be more trusted by default, to ask more for forgiveness and
less for permission.</p>
<p>Trust is a weird thing. When your behavior is constrained by rules and
automated tooling, you tend to try to game the system and get the most you
can out of it. But when trust is placed in you to do the right thing, the
incentive is to avoid abusing that trust, and to prove yourself
worthy of it.</p>
<p>So I'd like us (our community in general and the Technical Committee in
particular) to look into our processes and see where we can remove ourselves
from the action pipeline. Where we can trust by default and rely on our
escalation mechanisms to resolve issues as they arise (if they arise). That
may sound weird coming from a process/governance wonk like me, but that
is my new motto for the Liberty cycle: "step out of the way".</p>The facets of the OpenStack integrated release2015-03-04T16:00:00+01:002015-03-04T16:00:00+01:00Thierry Carreztag:ttx.re,2015-03-04:/facets-of-the-integrated-release.html<p>In a recent <a href="http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/">Technical Committee update</a> on the OpenStack blog,
I explained how the OpenStack "integrated release" concept, due to its binary
nature, ended up meaning all sorts of different things to different people.
That is the main reason why we want to deconstruct its various meanings into
a set of tags that can be independently applied to projects, in order to more
accurately describe our project ecosystem.</p>
<p>In this blogpost, I'll look into the various meanings the "integrated release"
ended up having in our history, and see how we can better convey that
information through separate tags.</p>
<h2>Released together</h2>
<p>The original meaning of "integrated release" is that all the projects in it
are released together on the same date, at the end of our development cycles.</p>
<p>I recently <a href="https://review.openstack.org/#/c/157322/">proposed</a> a tag
("release:at-6mo-cycle-end") to describe projects that commit to producing
a release at the end of our development cycles, which I think will cover
that facet.</p>
<h2>Managed release</h2>
<p>Very quickly, the "integrated release" also described projects which had their
release managed by the OpenStack Release Management team. This team sets up
processes (mostly a
<a href="https://wiki.openstack.org/wiki/FeatureFreeze">Feature Freeze</a>
deadline and a
<a href="https://wiki.openstack.org/wiki/Release_Cycle#Pre-release_.28Release_Candidates_dance.29">release candidate dance</a>)
to maximize the chances that managed projects would release on a pre-announced
date, and that none of the managed projects would end up delaying the end
release date.</p>
<p>Projects would be added to incubation, and when we thought they were ready to
follow those processes without jeopardizing the "integrated release" for
existing projects, they would get added to the next release.</p>
<p>That is a separate facet from the previous one, so I
<a href="https://review.openstack.org/#/c/157322/">proposed</a> a separate tag
("release:managed") to describe projects that happen to still be handled by
the Release Management team.</p>
<h2>Co-gating</h2>
<p>As we introduced complex code gating in OpenStack Infrastructure, the
"integrated release" concept grew another facet: it would also mean that
changes to each project are tested against the other projects' master
branches. That way we ensure that changes in project B don't break project A. This is rather
neat for a tightly-coupled set of projects. But this may be a bit overkill
for projects higher in the stack that just consume public APIs. Especially
when non-deterministic test errors in project A prevent changes from landing
in B.</p>
<p>We need to revise how to split co-gating into meaningful groups.
<a href="https://review.openstack.org/#/c/150653/">This work</a> is led by Sean Dague.
Once it is completed, I expect us to convey the information on what is
actually tested together using tags as well.</p>
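<p>As a thought experiment, the grouping idea can be pictured like this:
instead of one global co-gate, each tightly-coupled group cross-tests only
within itself. A minimal sketch in Python, where the group contents and
function name are purely illustrative assumptions, not the actual gate
configuration:</p>

```python
# Hypothetical sketch of splitting one global co-gate into smaller groups:
# each group lists projects that are tightly coupled and must be tested
# against one another's master branches. Group contents are illustrative.

COGATE_GROUPS = [
    {"nova", "glance", "cinder", "neutron"},   # tightly-coupled IaaS base
    {"heat", "ceilometer"},                    # higher-level API consumers
]

def cogated_with(project):
    """Projects whose master branches a change to `project` is tested against."""
    peers = set()
    for group in COGATE_GROUPS:
        if project in group:
            peers |= group - {project}
    return sorted(peers)

print(cogated_with("nova"))   # cross-tested within the base group only
print(cogated_with("heat"))   # no longer blocked by base-group test flakiness
```

<p>Under this model, a non-deterministic test failure in nova can delay heat
changes only if the two projects are declared tightly coupled.</p>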
<h2>Supported by OpenStack horizontal efforts</h2>
<p>From there, the "integrated release" naturally evolved to also mean the set
of projects that horizontal efforts (such as Documentation, QA, stable branch
maintenance, Vulnerability management, Translations...) would focus their
work on. Being part of the "integrated release" ensured that you would be
fully supported by those horizontal teams.</p>
<p>That didn't scale that well, though. The documentation team was the first to
exhaust their limited resources, and started to move to a model where they
would not directly write all the docs for all the integrated projects. Since
then, all horizontal teams decided to gradually move to the same model, where
they would directly handle a number of projects (or none), but provide tooling,
mentoring and support for all the others.</p>
<p>It's still valuable information to know which project happens to be directly
handled by which horizontal effort, and which project ends up having security
support or basic docs. So we'll introduce tags in the future to accurately
describe this facet.</p>
<h2>The base features</h2>
<p>Going downhill, the "integrated release" started to also mean the base
features you can rely on being present when people say they deployed
"OpenStack". That was a bit at odds with the previous facets though: why would
all co-gating projects with a coordinated release necessarily be essential?
And indeed, the integrated release grew beyond "base" features (like Keystone)
to include obviously optional projects (like Sahara or Ceilometer).</p>
<p>I personally think our users would still benefit from a description of what
layer each project belongs to: is it providing base IaaS compute functionality,
or is it relying on that being present? This is not a consensus view, though,
as some people object to all proposals leading to any type of ranking within
OpenStack projects.</p>
<h2>OpenStack</h2>
<p>At that point, the "integrated release" was "the OpenStack release", and things
outside of it were "not OpenStack".</p>
<p>This obviously led to more pressure to add more projects in it. But when the
OpenStack governance (previously under a single Project Policy Board
banner) was split between the Technical Committee and the Foundation Board,
the former retained control over the "integrated release" contents, and the
latter took control of trademark usage. This created tension over that specific
facet.</p>
<p>Defcore was created to solve this problem, by defining the criteria to apply
to various trademark programs. When asked to provide a set of projects (or
rather, a set of sections of code) to apply the trademark to, the Technical
Committee answered with the only concept it had (you guessed it, "the
integrated release" again).</p>
<p>In the tags world, when asked for a set of projects to apply a particular
trademark program to, the Technical Committee shall be able to provide a
finer-grained answer, by defining a specific tag for each question.</p>
<h2>Stability</h2>
<p>Further downhill, "integrated release" also started to mean "stable". Once
published in such a release, a project would not remove a feature without
proper discussion, notice, and deprecation period. That was yet another
facet of the now-bloated "integrated release" concept.</p>
<p>The issue is, not all projects were able to commit to the same stability
rules. One would never deprecate an existing feature, while another would rip
its API out over the course of two development cycles. One size didn't fit
all.</p>
<p>In the tags world, my view is that Ops and devs, working together, should
define various stability levels and the rules that apply for each. Then each
project can pick the tag corresponding to the stability level they can commit
to.</p>
<h2>Maturity</h2>
<p>Last but not least, at one point people started to assume that projects in
the "integrated release" were obviously mature. They were all in widespread
usage, enterprise-ready, carrier-grade, service-provider-class, web-scale.
The reality is, this facet is also complex: some projects are, some are
less so, and some aren't. So we need to
describe the various maturity styles and levels, and inform our users of each
project's real status.</p>
<p>It's difficult to describe maturity objectively though. I intend to discuss
that topic with the people best placed to accurately describe it:
the OpenStack operators gathered at the Ops Summit next week.</p>The Way Forward2014-12-18T14:15:00+01:002014-12-18T14:15:00+01:00Thierry Carreztag:ttx.re,2014-12-18:/the-way-forward.html<p>The OpenStack project structure has been under heavy discussion over the
past months. There was a
<a href="http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html">long email thread</a>,
<a href="https://dague.net/2014/08/26/openstack-as-layers/">a</a>
<a href="http://inaugust.com/post/108">lot</a>
<a href="http://www.stillhq.com/openstack/kilo/000002.html">of</a>
<a href="http://ttx.re/problem-space-in-the-big-tent.html">opinionated</a>
<a href="http://www.joinfu.com/2014/09/so-what-is-the-core-of-openstack/">blogposts</a>,
a <a href="https://etherpad.openstack.org/p/kilo-crossproject-growth-challenges">cross-project design summit session</a>
in Paris, and various strawmen proposed to our
<a href="https://review.openstack.org/#/q/project:openstack/governance,n,z">governance repository</a>.
Based on all that input, the OpenStack Technical Committee worked on a clear
specification of the problems we are trying to solve, and the proposed way
to fix them. What follows is an excerpt of the
<a href="https://review.openstack.org/#/c/138504/">approved resolution</a>.</p>
<h2>Problem description</h2>
<p>Our project structure is currently organized as a ladder. Developers form
teams, work on a project, then apply for incubation and ultimately graduate
to be part of the OpenStack integrated release. Only integrated projects
(and the few horizontal efforts necessary to build them) are recognized
officially as "OpenStack" efforts. This creates a number of issues, which
were particularly visible at the Technical Committee level over the Juno
development cycle.</p>
<p>First, the integrated release as it stands today is not a useful product for
our users. The current collection of services in the integrated release spans
cloud-native APIs (swift, zaqar in incubation), base-level IaaS blocks
(nova, glance, cinder), high-level aaS (sahara, trove), and lots of things
that span domains. Some projects (swift, ironic...) can be used quite well
outside of the rest of the OpenStack stack, while others (glance, nova)
really don't function in a different context. Skilled operators aren't
deploying "the integrated release": they are picking and choosing between
components they feel are useful. New users, however, are presented with a
complex and scary "integrated release" as the thing they have to deploy and
manage: this inhibits adoption, and this inhibits the adoption of a slice of
OpenStack that could serve their needs.</p>
<p>Second, the integrated release being the only and ultimate goal for projects,
there is no lack of candidates, and the list is ever-growing. Why reject
Sahara if you accepted Trove? However, processes and services are applied
equally to all members of the integrated release: we gate everything in the
integrated release against everything else, we do a common, time-based
release every 6 months, we produce documentation for all the integrated
release components, etc. The resources working on those integrated horizontal
tasks are very finite, and complexity grows non-linearly as we add more
projects. So there is outside pressure to add more projects, and internal
pressure to resist further additions.</p>
<p>Third, the binary nature of the integrated release results in projects
outside the integrated release failing to get the recognition they deserve.
"Non-official" projects are second- or third-class citizens which can't get
development resources. Alternative solutions can't emerge in the shadow of
the blessed approach. Becoming part of the integrated release, which was
originally designed to be a technical decision, quickly became a
life-or-death question for new projects, and a political/community minefield.</p>
<p>In summary, the "integrated release" is paradoxically too large to be
effectively integrated, installed or upgraded in one piece, and too small to
express the diversity of our rich ecosystem. Its slow-moving, binary nature
is not granular enough to represent the complexity of what our community
produces, and therefore we need to reform it.</p>
<p>The challenge is to find a solution which allows us to address all three of
those issues: embrace the diversity of our ecosystem while making sure that
what we produce is easily understandable and consumable by our downstream users
(distributions, deployers, end users), all that without putting more stress
on the already-overworked horizontal teams providing services to all
OpenStack projects, and without limiting the current teams' access to common
finite resources.</p>
<h2>Proposed change</h2>
<h3>Provide a precise taxonomy to help navigate the ecosystem</h3>
<p>We can't add any more "OpenStack" projects without dramatically revisiting
the information we provide. It is the duty of the Technical Committee to
help downstream consumers of OpenStack understand what each project means
to them, and provide them with accurate statuses for those projects.</p>
<p>Currently the landscape is very simple: you're in the integrated release, or
you're not. But since there was only one category (or badge of honor), it
ended up meaning different things to different people. From a release
management perspective, it meant what we released on the same day. From a
CI perspective, it meant what was co-gated. From an OpenStack distribution
perspective, it meant what you should be packaging. From some operators'
perspective, it meant the base set of projects you should be deploying. From
other operators' perspective, it meant the set of mature, stable projects.
Those are all different things, and yet we used a single category to describe
it.</p>
<p>The first part of the change is to create a framework of tags to describe
more accurately and more objectively what each project produced in the
OpenStack community means. The Technical Committee will define tags and the
objective rules to apply them. This framework will allow us to progressively
replace the "integrated release" single badge with a richer and more nuanced
description of all "OpenStack" projects. It will allow the Technical
Committee to provide more precise answers to the Foundation Board of
Directors' questions about which set of projects may make sense for a given
trademark license program. It will allow our downstream users to know which
projects are mature, which are security-supported, which are used in more
than one public cloud, or which are really massively scalable.</p>
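<p>To picture how such a tag framework answers different downstream questions,
here is a minimal sketch: each project carries a set of independently-applied
tags, and each question maps to a query on one tag. The project names and tag
data below are purely illustrative, not the actual governance repository
contents:</p>

```python
# Hypothetical sketch of the tag framework: projects carry sets of
# independently-applied tags, and downstream users query by the facet
# they care about. Project/tag data here is illustrative only.

projects = {
    "nova":  {"release:at-6mo-cycle-end", "release:managed", "security-supported"},
    "swift": {"release:at-6mo-cycle-end", "security-supported"},
    "zaqar": {"release:at-6mo-cycle-end"},
}

def projects_with(tag):
    """Return the sorted names of projects carrying the given tag."""
    return sorted(name for name, tags in projects.items() if tag in tags)

# A packager, a release manager and a security team each ask a different
# question, and each gets its own answer instead of one binary badge:
print(projects_with("release:managed"))      # whose releases are coordinated?
print(projects_with("security-supported"))   # which get security support?
```

<p>The point of the sketch is that no single "integrated" flag could answer
both queries at once; separate tags can.</p>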
<h3>Recognize all our community is a part of OpenStack</h3>
<p>The second part of the change is recognizing that there is more to
"OpenStack" than a finite set of projects blessed by the Technical
Committee. We already have plenty of projects that are developed on
OpenStack infrastructure, follow the OpenStack way of doing things, have
development discussions on the openstack-dev mailing-list and
use #openstack-meeting channels for their team meetings. Those are part of
the OpenStack community as well, and we propose that those should be considered
"OpenStack projects" (and be hosted under openstack git namespaces), as
long as they meet objective criteria for inclusion (to be developed as one
of the work items below). This might include items such as:</p>
<ul>
<li>
<p>They align with the OpenStack Mission: the project should help further the
OpenStack mission, by providing a cloud infrastructure service, or
directly building on an existing OpenStack infrastructure service</p>
</li>
<li>
<p>They follow the OpenStack way: open source (licensing), open community
(leadership chosen by the contributors to the project), open development
(public reviews on Gerrit, core reviewers, gate, assigned liaisons), and
open design (direction discussed at Design Summit and/or on public forums)</p>
</li>
<li>
<p>They ensure basic interoperability (API services should support at least
Keystone)</p>
</li>
<li>
<p>They submit to the OpenStack Technical Committee oversight</p>
</li>
</ul>
<p>These criteria are objective, and therefore the Technical Committee may
delegate processing applications to another team. However, the TC would
still vote to approve or reject applications itself, based on the
recommendations and input of any delegates, but without being bound to
that advice. The TC may also decide to encourage collaboration between
similar projects (to reduce unnecessary duplication of effort), or to
remove dead projects.</p>
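<p>To illustrate why objective criteria make delegation possible, here is a
hypothetical sketch of a mechanical check a delegated team could run; the
criteria field names are my own shorthand for the bullet list above, not an
actual governance schema:</p>

```python
# Hypothetical sketch: record each inclusion criterion as a boolean and
# report which ones an application does not yet meet. Field names are
# illustrative assumptions mapping to the four bullets above.

CRITERIA = (
    "aligns_with_mission",
    "follows_the_openstack_way",
    "basic_interoperability",
    "accepts_tc_oversight",
)

def review(application):
    """Return the list of criteria an application does not yet meet."""
    return [c for c in CRITERIA if not application.get(c, False)]

candidate = {
    "aligns_with_mission": True,
    "follows_the_openstack_way": True,
    "basic_interoperability": False,   # e.g. no Keystone support yet
    "accepts_tc_oversight": True,
}

print(review(candidate))   # the one criterion still blocking inclusion
```

<p>Because the check is objective, the TC can review the report rather than
re-litigate every application, while still keeping the final vote.</p>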
<p>This proposed structure will replace the current program-driven structure.
We'll still track which team owns which git repository, but this will let
multiple different "OpenStack" teams potentially address the same problem
space. Contributors to projects in the OpenStack git namespace will all be
considered ATCs and participate in electing the Technical Committee.</p>
<h3>Transition</h3>
<p>As for all significant governance changes, we need to ensure a seamless
transition and reduce the effect of the reform on the current development
cycle. To ensure this seamless transition, the OpenStack taxonomy will
initially define one tag, "integrated-release", which will be applied to the
projects integrated in the Kilo cycle. To minimize disruption, this tag
will be used throughout the Kilo development cycle and for the Kilo end
release. This tag may be split, replaced or redefined in the future, but
that will be discussed as separate changes.</p>
<h2>Next steps</h2>
<p>I invite you to read
<a href="http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html">the full text</a>
of this Technical Committee resolution to learn more about the proposed
implementation steps or the impact on current projects.</p>
<p>It's important to note that most of the work and decisions are still ahead of
us: those proposed changes are just the base foundational step, enabling the
future evolution of our project structure and governance. Nevertheless, it
still is a significant milestone to clearly describe the issues we are
working to solve, and to agree on a clear way forward to fix them.</p>
<p>The next step now is to communicate more widely about the direction we are
going, and start the discussion on some more difficult and less consensual
details (like the exact set of objective rules applied to judge new entrants,
or the need for (and a clear definition of) a <em>compute-base</em> tag).</p>A great week coming up2014-10-31T14:45:00+01:002014-10-31T14:45:00+01:00Thierry Carreztag:ttx.re,2014-10-31:/a-great-week-coming-up.html<p>I'm extremely excited by the coming OpenStack Summit week in Paris!</p>
<p>First, well, Paris. It is a pretty amazing city, with great sights, museums and
restaurants. Those who arrived early should have an awesome Saturday, sun all
day, with an 18°C/65°F high. This is pretty rare in November, so we are very
lucky! Then the week gets cold and wet, complete with a traditional transport
strike on Tuesday, so we don't mind being stuck all day in windowless design
summit rooms... Perfect!</p>
<p>Second, this OpenStack Summit is completely sold out <em>days before</em> the event.
That means we managed to get to a critical mass of local/regional community.
That wasn't a given: when we started working on a summit in Paris two years
ago, we were betting on the promising signs that Europe in general, and France
in particular, showed at that time. It feels like we are in the right place
at the right time after all.</p>
<p>Third, our sponsors and party organizers seem to have outdone themselves. The
bar was set pretty high by previous events. But even though I have lived in
this city for a long time, I'm very interested in participating in the pretty
exclusive events that were organized for this week's social program... In
particular, the Tuesday party at Pavillons de Bercy is, in my honest opinion,
not something anyone should miss.</p>
<p>Fourth and finally, 4 years and 10 releases in, OpenStack is at a crossroads.
There is plenty of past to reflect on, and plenty of future to discuss. There
are a number of processes and structures we need to evolve. There are a number
of changes we need to adapt to. I expect this week to be critical in shaping
our future successes: it's therefore pretty important to be around and
participate in the open discussion.</p>
<p>Shameless plug: I'll be moderating a
<a href="http://sched.co/1qeOE80">panel of old-timers</a> on Wednesday at 9am, to reflect
back on what we achieved, and think about what we failed to achieve in these
last four years. Even if you were at the party the night before, that should
give you a good reason to get up!</p>
<p>So please, all enjoy this great week coming up.</p>The problem space in the big tent2014-10-01T14:39:00+02:002014-10-01T14:39:00+02:00Thierry Carreztag:ttx.re,2014-10-01:/problem-space-in-the-big-tent.html<p>Unless you have been living under a rock, you must have noticed that the
OpenStack community is <a href="https://dague.net/2014/08/26/openstack-as-layers/">currently</a> <a href="http://inaugust.com/post/108">brainstorming</a> <a href="http://www.joinfu.com/2014/09/so-what-is-the-core-of-openstack/">potential</a> <a href="http://www.stillhq.com/openstack/kilo/000002.html">evolutions</a>
to its project and governance structure. A lot of those posts focused on
solutions and implementation details, and it's easy to design the wrong
solution when you didn't first define the problem space. So in this post I'd
like to take a step back and look at which issues we are trying to solve, and
the constraints we have.</p>
<p>With the continued growth of OpenStack, the current project structure and
governance model (which you can largely blame me for) fail in various ways.</p>
<h3>The "integrated release" Grail</h3>
<p>In our current structure, the integrated release is the ultimate goal, with
recognition as an official program and incubation seen as the first
steps on the enlightenment ladder. That creates a number of issues.</p>
<p>First, the integrated release being the ultimate goal, there is no lack of
candidate projects, and the list is ever-growing. Why reject Sahara if you
accepted Trove? However, processes and services are applied equally to all
members of the integrated release: we gate everything in the integrated
release against everything else, we do a common, time-based release every
6 months, we produce documentation for all the integrated release components,
etc. The resources working on those integrated horizontal tasks are very
finite, and complexity grows non-linearly as we add more projects. So there
is outside pressure to add more projects, and internal pressure to resist
further additions. This is obviously not sustainable.</p>
<p>Second, projects outside the integrated release fail to get the recognition
they deserve. Some companies won't invest resources to participate in a
"non-official" project. So becoming part of the integrated release, which
was designed to be a technical decision, quickly became a life-or-death
question for new projects, and a political/community minefield.</p>
<p>We need to find a model that lets us be inclusive of all "OpenStack" projects,
while preserving the resources of the horizontal teams. I think Monty's
"layer #1" (which I prefer to call <strong>Ring0</strong>) solves that issue, by defining
a use-case-driven, limited, mostly-static set that the current horizontal
teams (infra, release management, QA, Docs...) can commit to supporting
directly. All the other projects may or may not be supported directly (or
just getting tools and advice to do it themselves), depending on those
horizontal teams capacity. By making ring0 arbitrarily small and static, most
things would not be in it, so we avoid the Grail effect. Projects outside
ring0 can enjoy more freedom (with their release cycle, with their gating
choices...) than the tightly-controlled ring0. Ring0 becomes a production
artifact, not a badge of honor.</p>
<h3>Duplication, Competition and Overlap</h3>
<p>In the early days of this project, I was obsessed with avoiding duplication
of effort. In a large project like OpenStack, it's really easy for teams to
work in their (usually corporate) corner on their own invented-here solution
for a common problem. So the governance model was built to reduce the risk of
duplication of effort, by creating obviously-blessed projects, obsessively
avoiding overlap in scope, and constantly encouraging people to merge their
work and join existing teams rather than do their own thing. We could say it
was quite successful, and whoever has attended our Design Summits can see
how this cross-organization collaboration miracle happens every time.
Some now take it for granted, but it didn't happen by accident.</p>
<p>That said, there is now the unintended side-effect that we actively prevent
the emergence of better alternatives to replace existing "blessed" things.
It's currently difficult for a project to prove itself outside an official
program, in the shadow of an existing project with the "integrated" badge.</p>
<p>It's a difficult balance to strike. We still mostly want to avoid duplication
of effort and have teams cooperate and join efforts on the same code base,
since avoiding such waste is one of the big benefits of our open source/open
design/open development model. But we still want to let new flowers bloom.
I hope that removing blessed programs and the ladder to integration will go
a long way toward fixing this, but I'd hate it if we ended up discouraging
collaboration as a side effect.</p>
<p>We also need to keep discouraging partially-overlapping scopes, because that
would be a disservice to our users. If a project does AB and another does CD,
there is little value to our users in a new project that would do ABC.</p>
<h3>Not answering the right questions</h3>
<p>The last issue with the current system is that it fails to answer the
questions that our downstream users (packagers, deployers, end users) rightly
ask themselves about OpenStack. A single, binary badge of honor just can't
answer all the different questions. Jay's
<a href="http://www.joinfu.com/2014/09/so-what-is-the-core-of-openstack/">post</a>
touches on that part extensively: I think developing a tag taxonomy to
document what each project in "OpenStack" can do for you, depending on what
type of user you are, is a good answer. Removing the "integrated release"
super-badge will allow that information to emerge.</p>
<h3>The limit of the tent</h3>
<p>So it's quite obvious at this point that we need to change the concept of
"integrated release", which is a lot too binary. Replacing it with a set of
flexible structures (a tag taxonomy, a ring0 that current horizontal teams in
OpenStack feel fine supporting, other groups as needed...) sounds like the
way to go. Being inclusive in what we accept in the "big tent" is also very
consensual. But even big tents have limits, lines in the sand that we draw
between what is in and what is not. Where should ours be? Supporters of
the big tent approach all seem to diverge on that detail.</p>
<p>There is still a marginal cost to pay for each project we add. If we decide
to precisely describe projects in the OpenStack ecosystem using a set of tags,
as Jay suggested, then it's only valuable if it's kept current, and the
maintenance cost increases (linearly) with each project addition. If those
projects are to be called "OpenStack", they may trigger a trademark search,
too. And what about Design Summit space ?</p>
<p>I'd like projects to at the very least loosely align with the OpenStack mission,
otherwise we'd lose our key identity and purpose. Monty proposes an alignment
check ("are you one of us"), which mostly translates into observing a number
of key governance and technical principles. He would also focus design summit
space on "Ring0" projects. I fear that may make Ring0 artificially attractive
and recreate a Grail effect, but I don't have a better solution to suggest.</p>
<h3>The TC constituency</h3>
<p>The last constraint would be to find the right constituency for the Technical
Committee. The Technical Committee is defined in the Foundation bylaws as the
governance body in charge of the technical direction of "OpenStack" as a whole.
There is an essential symmetry, where projects willingly place themselves
under the authority of the TC, and in exchange get the right to participate
in the Technical Committee members election. We used to do that by blessing
individual projects. Currently we bless teams instead (the "Programs" concept
was created to add a level of indirection between the team we bless and the
code repositories they want to work on, to give teams more flexibility on how
they organize their code). As we change the structure, we need to find what
the new area of authority of the TC is, and therefore what its new
constituency would look like.</p>
<p>There are two approaches. You can consider that the TC only has authority
over a subset (Ring0) and supporting projects (horizontal efforts which
apply to Ring0). But that kind of means other projects are not really
"OpenStack". The other approach is to consider that all the "big tent" is
under the authority of the Technical Committee, and contributing to any
project in the big tent gives you a right to vote in TC membership elections.
I'm leaning toward the latter option, even if that will change the dynamics
of the TC and may require stronger alignment checks ("are you one of us")
before entry.</p>
<h3>Conclusion</h3>
<p>As we move on and prepare more detailed proposals, I'd like to make sure all
the proposed solutions have an answer for those various questions:</p>
<ol>
<li>How do we solve the integrated gate bottleneck ?</li>
<li>How do we solve the horizontal teams scaling issues ?</li>
<li>How do we prevent having a single Grail, to move the TC away from its
current badge-granting authority role ?</li>
<li>How do we allow competition and alternative solutions ?</li>
<li>How do we continue to encourage collaboration and avoid duplication of
effort ?</li>
<li>How do we keep on discouraging partial scope overlap ?</li>
<li>How do we provide the answers about our projects that our
packagers/deployers/end users need ?</li>
<li>If a "big tent" approach is proposed, where does the big tent end ?</li>
<li>What would the Design Summit look like in the new world order ?</li>
<li>What would the TC area of authority (and therefore constituency) be in
the new world order ?</li>
</ol>
<p>Now, brainstorm on.</p>
<h1>#1 OpenStack contributor: all of us</h1>
<p><em>2014-09-29, Thierry Carrez</em></p>
<p>It's that time of the year again... As we get to the end of our development
cycle, some people look at hard contributions numbers and some companies
decide to expose questionable metrics to brag about being #1.</p>
<p>This cycle, <a href="http://www.networkworld.com/article/2686462/public-cloud/hp-leapfrogs-red-hat-to-become-top-contributor-to-openstack.html">it's</a> <a href="http://www.theinquirer.net/inquirer/news/2371470/hp-is-now-the-biggest-contributor-to-openstack-juno">apparently</a> <a href="https://twitter.com/HPCloudAngel/status/511913478362103810">HP's</a> <a href="https://twitter.com/HPCTOMartinFink/status/512288936819818496">turn</a>.
But you can make numbers say anything... It's true that HP is the largest
contributor when you consider the integrated release, the incubated projects
and the supporting projects altogether, and we are very happy with HP's
involvement in OpenStack. But numbers can always tell a different story
if you look at them from a slightly different angle. If you only consider
commits to the integrated release (the projects that we will end up releasing
as "Juno" in a few weeks), then Red Hat is still #1. If you count code reviews
on this same integrated release, then Mirantis leads. If you only consider
Documentation, then SUSE leads. It's all pretty balanced and healthy.</p>
<p>Rather than celebrating being #1, we should all celebrate the diversity of our
community and our contributors. 132 different organizations were involved.
6% of our code contributions come from individuals who are not affiliated
with any company or organization. The
<a href="http://www.ufcg.edu.br/">Universidade Federal de Campina Grande</a>
is #25 in commits to OpenStack projects.
<a href="https://www.b1-systems.de/">B1 systems</a>, a small German consulting
firm, is #8, mostly through the work of one person.</p>
<p>Every little bit counts. It doesn't matter who is #1, it matters that we all
<em>can</em> contribute, and that we all <em>do</em> contribute.
It matters that we keep on making sure <em>everyone</em> can easily
contribute. That's what's really important, and I wish we all were celebrating
that.</p>
<h1>The Kilo Design Summit in Paris</h1>
<p><em>2014-09-17, Thierry Carrez</em></p>
<p>In less than two months the OpenStack development community will gather in
Paris to discuss the details of the Kilo development cycle. It starts after the
keynotes on the Tuesday of the summit week, and ends at the end of the day on
the Friday. We decided a number of changes in the Design Summit organization in
order to make it an even more productive time for all of us.</p>
<h2>Tuesday</h2>
<p>On the Tuesday we'll have <strong>cross-project workshops</strong>. These are 40-min or
90-min sessions on issues that span multiple programs, and where we try to
reach alignment and exploit that little window in time where all projects are
in the same place. These were very successful in Atlanta, so we'll do them
again. We started brainstorming potential topics on
<a href="https://etherpad.openstack.org/p/kilo-crossproject-summit-topics">this etherpad</a>.</p>
<p>On the same day there will be time for scheduled sessions for incubating
projects, as well as some sessions for other OpenStack projects. I'll post
details on how to participate in that on the
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">openstack-dev mailing-list</a> soon.</p>
<h2>Wednesday and Thursday</h2>
<p>On Wednesday and Thursday we'll have our usual <strong>scheduled sessions</strong>. Those
are 40-min sessions on a specific theme, and they appear on the Design Summit
schedule. The idea is to use those few visible time slots to gather feedback on
issues that need a lot of external input, but also to address some obvious
elephants in the room.</p>
<h2>Friday</h2>
<p>On the Friday we'll have <strong>contributors meetups</strong>. All programs will have
a space for a half-day or a full-day of informal meetup with an open agenda.
The idea is to use that time to get alignment between core contributors on the
cycle objectives, to solve project-internal issues, or have further discussions
on a specific effort. The format is very much like the mid-cycle meetups.</p>
<h2>Program pods</h2>
<p>Program pods are roundtables contributors can informally gather around to
continue a discussion when they don't have scheduled space at the same time. In
Paris we have limited space, so there will be a limited number of program pods.
Incubating programs will have a designated pod, but other programs will have to
share the remaining available space. Hopefully it won't be that much of a
musical-chairs issue, thanks to the contributors meetup day on Friday.</p>
<h2>Planning the summit</h2>
<p>We decided to abandon the formal <em>session suggestion</em> system in favor of a more
collaborative and open approach. Topics are suggested on an open document (see
<a href="http://wiki.openstack.org/wiki/Summit/Planning">the list here</a>), discussed and
deduplicated at IRC team meetings, and finally split between scheduled session
slots and the contributors meetup agenda. This is done under the guidance of
the Kilo PTLs, whom we'll be electing soon. Reminders will be posted on the
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">openstack-dev</a> list.</p>
<p>So I invite everyone to join those documents and the future team meetings
discussing the summit agenda. Let's all make this Design Summit great !</p>
<h1>The next steps for Zaqar</h1>
<p><em>2014-09-17, Thierry Carrez</em></p>
<p>Yesterday the
<a href="https://wiki.openstack.org/wiki/Governance/TechnicalCommittee">OpenStack Technical Committee</a>
concluded the end-of-cycle graduation review for
<a href="https://wiki.openstack.org/wiki/Zaqar">Zaqar</a>
(the project previously called Marconi), and decided Zaqar should stay in
incubation during the Kilo cycle.</p>
<h2>What it does not mean</h2>
<p>It does not mean that the Zaqar developers failed to deliver. The Zaqar team
crossed all the checkboxes in our
<a href="http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements.rst">integration requirements</a>
checklist. They consistently took into account our feedback and proved
themselves ready to make any required change. This is why the decision appears
so unfair to them: they could not really do more.</p>
<p>It does not mean that OpenStack wouldn't benefit from a basic queue/messaging
service. If anything, it means that we actually care a lot about such a
service, enough to have strong opinions about how it should be done, the scope
of the designed solution and what exact use cases it enables or prevents.</p>
<h2>What it does mean</h2>
<p>It does mean that our incubation process is broken. The Technical Committee
could not provide the Zaqar team with the feedback they needed to be successful.
It could not discuss Zaqar's scope and design well before the end-of-cycle
graduation review, when time constraints push for a quick decision. In this
cycle the TC focused on existing issues with existing integrated projects (like
the Neutron/nova-network gaps), which some might argue need to be fixed before
we add more integrated projects. We ended up not having the time to follow up
on incubated projects as much as we should have.</p>
<p>The good news here is that the gap coverage for existing integrated projects
during the Juno cycle was generally successful, so this was probably more of a
one-time thing. If we preserve the incubation process as-is, next cycle we plan
to assign a specific TC member mentor to every incubated project, so that we
provide consistent feedback and raise the necessary discussions in due time.</p>
<p>It does also mean that our "in or out" integrated release model is broken.
Having a single class you can graduate to means accepting a new integrated
project imposes a significant cost on existing horizontal teams. The integrated
release is therefore very conservative -- if there is any doubt, better leave
the incubated project to resolve those doubts while in incubation, rather than
in the first integrated cycle with the time pressure of the next release date.
And if it's not "in", it's just "out". There is no intermediary area. If we had
several classes (or layers), triggering different amounts of resources, then it
would be much less black or white and we could have a conservative class and an
inclusive class. We are just starting a discussion at the TC level on how we
could evolve that model (and the "programs" that back that model) to allow us
to scale.</p>
<h2>The next steps</h2>
<p>What's up next for Zaqar, then ? It is the duty of the Technical Committee to
explain its decision, and work out under which conditions it would accept Zaqar
in 6 months' time. So now that we are free from the time pressure of the
integration decision, we need to determine which options are on the table and
which ones would be preferred. This should happen in the coming weeks, before
the Design Summit in Paris. Then we really need to follow up on that feedback,
by designating a member from the newly-elected TC as the Zaqar mentor, tasked
with making sure the communication lines stay open at all times and we stay in
alignment. We need to learn from our mistakes, and not find ourselves in the
same position in 6 months.</p>
<p>Finally, depending on the outcome of the integrated release model evolution
discussion, it is possible that the decision to stay in incubation during the
Kilo cycle will not matter that much. If we set up a layered model for example,
it is very possible that Zaqar would directly appear on that map. So my advice
would be to follow closely how that discussion goes to see how it may affect
Zaqar in the near future.</p>
<h1>Analysis of April 2014 TC election</h1>
<p><em>2014-06-06, Thierry Carrez</em></p>
<p>Some people asked me to analyze the results of the recent TC election in
<a href="https://ttx.re/an-analysis-of-the-technical-committee-election.html">the same way I ran the analysis for the previous
one</a>.
I finally found the time to do it, here are the results.</p>
<p>The April 2014 election was set up to renew 7 of the 13 TC members. We
had 17 candidates, you can find the official results of the election
<a href="http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d34934c9fd1f6282">here</a>.</p>
<h2>Condorcet spread</h2>
<p>This graph shows how each candidate was ranked. The first bubble on the
left represents the number of people that have placed that candidate as
their only first choice. The last bubble on the right represents the
number of people that have placed this candidate as their only last
choice. If multiple candidates are ranked at the same level, we average
their "score".</p>
<p><img alt="graph" src="https://ttx.re/images/tc-s2014.png"></p>
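<p>As a sketch of how such a spread can be computed: the following is a hypothetical reconstruction of the tie-averaging rule described above, assuming ballots are maps from candidate to rank (1 = first choice). It is not the actual script behind the graph.</p>

```python
from collections import defaultdict

def spread(ballots, candidates):
    """Histogram of effective positions per candidate; candidates tied at
    the same rank share the average of the positions they jointly occupy."""
    hist = defaultdict(lambda: defaultdict(float))
    for ranks in ballots:                      # ranks: candidate -> rank
        pos = 1
        for r in sorted(set(ranks.values())):  # best rank first
            tied = [c for c in candidates if ranks[c] == r]
            avg = pos + (len(tied) - 1) / 2    # average occupied position
            for c in tied:
                hist[c][avg] += 1
            pos += len(tied)
    return hist

# Toy ballot: A and B tied first, C second
hist = spread([{"A": 1, "B": 1, "C": 2}], ["A", "B", "C"])
# A and B each land at position 1.5; C lands at position 3.0
```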
<p>We can see that once again the Condorcet algorithm preferred a
consensual candidate (Devananda) over less-consensual ones (Julien,
Sergey). It's also interesting to compare the spread between Michael,
Jim and Mark.</p>
<h2>Proportional Condorcet</h2>
<p>At the previous election, running the same ballot with CIVS's so-called
"proportional mode" option altered the result. This time, the
proportional mode returns <a href="http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_961ededa1fca8e03">the exact same set of
winners</a>.</p>
<h2>Partisan voting</h2>
<p>The goal of this analysis is to detect blocks of voters, who
consistently place a set of candidates above anyone else. I slightly
modified my script to reveal the most popular pairs: calculate how many
people place the same two people above anyone else, and try to detect
bias in the most popular pairs.</p>
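<p>A minimal sketch of that pair counting, assuming ballots are maps from candidate to rank (lower is better); this is an illustration of the technique, not the actual modified script, and the toy rankings are invented:</p>

```python
from itertools import combinations
from collections import Counter

def pair_preferences(ballots):
    """Count, for every pair, the ballots ranking both candidates
    strictly above everyone else (lower rank number = preferred)."""
    counts = Counter()
    for ranks in ballots:                      # ranks: candidate -> rank
        for a, b in combinations(sorted(ranks), 2):
            worst_of_pair = max(ranks[a], ranks[b])
            others = (r for c, r in ranks.items() if c not in (a, b))
            if all(r > worst_of_pair for r in others):
                counts[(a, b)] += 1
    return counts

# Toy ballots (names reused from the post, rankings invented)
ballots = [
    {"Vish": 1, "Thierry": 1, "Jay": 3, "Sergey": 4},
    {"Vish": 1, "Thierry": 2, "Jay": 4, "Sergey": 3},
    {"Jay": 1, "Sergey": 1, "Vish": 3, "Thierry": 4},
]
counts = pair_preferences(ballots)
# ("Thierry", "Vish") is placed above everyone else on 2 of 3 ballots
```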
<p>The most popular pair was Vish/Thierry (with 5.58% of voters placing us
above anyone else). We could call that the <em>old-timers</em> party. The
second most popular pair was Jay/Sergey (the <em>Mirantis</em> party) with
5.13%. 4.24% of the voters placed the Julien/Thierry pair on top of
anything else: that could reveal an influential <em>French</em> party ! The
<em>Foundation</em> party pair (Jim/Thierry) was preferred by 3.34%, same
result as the <em>Foodie</em> party (Mark/Thierry). The Jay/Vish pair was
preferred by 3.12% of the voters (a variant of the <em>old-timers</em> party).
It's also worth noting that 2.45% of the voters favored the
Flavio/Steven pair (best score of the <em>RedHat</em> pairs), 1.56% favored
Michael/JohnG (best score of the <em>Rackspace</em> pairs) and 1.11% favored
Joe/Devananda (best score of the <em>HP</em> pairs).</p>
<p>While this analysis shows some corporate bias in the vote, it's worth
noting that (a) it's extremely limited, (b) it's actually decreasing
from the previous election, (c) it did not affect the result in any
significant way and (d) it's comparable to other bias (old timers,
French people).</p>
<h2>Run your own analysis</h2>
<p>That's all I had to share. But don't blindly trust me, you can run your
own analysis by downloading the <a href="http://civs.cs.cornell.edu/cgi-bin/download_ballots.pl?id=E_d34934c9fd1f6282">anonymized
ballot</a>
!</p>
<h1>F/OSS project governance models</h1>
<p><em>2014-04-23, Thierry Carrez</em></p>
<p>Various governance models exist for free and open source software
projects. Most of those happen naturally, some of them are chosen...
Which one is the best ? Is there a best ? How could we judge the best ?
Like any ecosystem, I'd postulate that F/OSS project communities should
have long-term survival as their main goal: the ability to continue
operation as the same community over time, without fracture or fork.</p>
<h3>Dictatorship</h3>
<p>The "benevolent dictator for life" model usually happens naturally. The
project is often originally the brainchild of a single, talented
individual, who retains final say over everything that happens to
<em>their</em> project. This usually works very well: the person is naturally
respected in the community. It also naturally allows for opinionated
design, and people who sign up to the project can't ignore what they
sign up for.</p>
<p>The main issue with that setup is that it's not replicable, it can't be
dictated. It either happens naturally, or it will never happen. You
don't choose someone to become your dictator-for-life after the fact.
Any attempt to do so would fail to get enough legitimacy and natural
respect to make it last. The second issue with that setup is that it's
not durable. If the dictator stops being active in the community, their
opinion is not as much respected anymore (especially by new
contributors), which usually triggers a painful fork or governance model
switch (that's what happened in Gentoo). Even in the rare cases where
the original dictator manages to retain interest and respect in the
project, it's inherently brittle: the "natural" dictator can't really be
replaced in case something bad happens. Succession is always dirty. So
from a long-term survival standpoint, this model is not that great.</p>
<h3>Aristocracy</h3>
<p>Aristocracy is used to solve the perceived drawbacks of the
dictator-for-life model. Instead of focusing on one person, let's have a
group of people in control of the project, and let that group
self-select successors in the wider pool of contributors. That's the
role of "committers" in certain projects, and it's also how Apache
project management committees (PMCs) usually work. It also works quite
well, with self-selection usually ensuring that the members share enough
common culture to reach consensus on most decisions.</p>
<p>The drawback here is obviously the self-selection bias. Aristocracies
all fall after getting more and more disconnected from the people they
control, and revolution happens. Open source aristocracies are no
different: they fall after gradually growing disconnected from their
project contributors base. Whenever contributors to an open source
project feel like their leaders are no longer representative of the
contributors or relevant to the present of the project, this disconnect
happens. In mild cases, people just go contribute somewhere else, and in
difficult cases this usually triggers a fork.</p>
<h3>Direct democracy / Anarchy</h3>
<p>The obvious way to solve that disconnect is to give the power directly
to the contributors. Direct democracy projects give ultimate power to
all the contributors. Anarchy projects let contributors do whatever they
want. Debian is an interesting mix of the two: developers vote on
general resolutions, but maintainers also have a lot of control on their
packages.</p>
<p>While these models have a certain appeal, those projects usually have a
hard time taking necessary decisions that affect the whole project, so
they tend to linger without taking any critical decision. It's also a model
that is difficult to evolve: when you try to add new layers on top of
it, they are never really accepted by the contributors base.</p>
<h3>Representative democracy</h3>
<p>That leaves us with representative democracy. You regularly designate a
small group of people and trust them to make the right decisions for the
governance of the project. It can happen in cases where there was no
natural dictator at the beginning of the project. It's different from
aristocracy in that they are chosen by the contributors base and
regularly renewed -- ensuring that they are always seen as a fair
representation of the contributors to the project. It's more efficient
than direct democracy or anarchy in making clear and opinionated
decisions.</p>
<p>Now it's far from perfect. As Churchill famously said, <em>it's the worst
form of government, except all those other forms that have been tried
from time to time</em>. It also only works if the elected people are seen as
legitimate and representative, so it requires good participation levels
in elections. So here is my plea: the OpenStack Technical Committee,
which oversees the development of the OpenStack open source project as a
whole, is being partially renewed this week. If you're an OpenStack
contributor, please vote: this will ensure that elected people have the
legitimacy necessary for making the decisions that need to be made, and
increase the health of the project.</p>
<h1>Upcoming changes to Design Summit format</h1>
<p><em>2014-03-20, Thierry Carrez</em></p>
<p>Since the very beginning of OpenStack we fulfilled our <a href="https://wiki.openstack.org/wiki/Open#Open_Design">Open
Design</a> promise by
organizing a developer gathering open to all OpenStack contributors at
the beginning of all our development cycles, called the <a href="https://wiki.openstack.org/wiki/Summit">Design
Summit</a>. Those events have
proven to be an essential part of OpenStack success and growth.</p>
<p>Design Summits are a set of discussion sessions under the auspices of a
given OpenStack program. There are no formal presentations or speakers,
just open discussions around a given development theme. The elected
Program Technical Leads are responsible for picking a set of discussion
topics, and they take suggestions from the rest of the community on our
session suggestion website at
<a href="http://summit.openstack.org/">summit.openstack.org</a>.</p>
<h2>Improvements</h2>
<p>One of the last sessions at the Icehouse Design Summit in Hong-Kong was
about the Design Summit format, and how we should improve on it. Several
issues were reported during that session, like:</p>
<ul>
<li>the inability for technical people to attend (or present at) the
rest of the OpenStack Summit</li>
<li>the (in)visibility of the <em>Unconference</em> track, making it difficult
for nascent projects to attract interested contributors</li>
<li>the difficulty of having cross-project discussions in a schedule
strictly organized around project-specific topics</li>
<li>the difficulty for incubated projects to continue to collaborate
during the week, outside of the limited scheduled slots allocated to
them</li>
</ul>
<p>I'm happy to report that we acted on that feedback and will implement a
number of changes in the upcoming Juno Design Summit in Atlanta in May.</p>
<p>First, we started <strong>staggering</strong> the Design Summit from the rest of the
OpenStack Summit. The main event starts on Monday and ends on Thursday,
while the Juno Design Summit starts on Tuesday and ends on Friday. This
should allow our key technical assets to attend <em>some</em> of the conference
and maybe present there.</p>
<p>Second, the <em>Unconference</em> track is abandoned. It will be replaced by
several initiatives. Part of the <em>Unconference</em> was traditionally used
by open source projects related to OpenStack to present themselves and
recruit contributors. We'll have an <strong>Other projects</strong> track at the
Design Summit to cover for those. This will be limited to one session
per project, but those will appear on the official schedule. If you are
an open source project related to OpenStack and would like one of those
slots, please head to the <a href="http://summit.openstack.org/">session suggestion
site</a> !</p>
<p>Another classic use of the <em>Unconference</em> was ad-hoc continuation of
discussions that started in scheduled sessions, or coverage of lesser
topics that couldn't find a place in scheduled sessions. To cover for
that, we'll set up a roundtable for each program (complete with a
flip chart) to serve as a rallying point for contributors around that
program. This designated space (codenamed <strong>project pod</strong>) can be used
to have additional discussions and continue collaboration outside of the
limited scheduled sessions.</p>
<p>Last but not least, we'll dedicate the first day of the summit (Tuesday)
to <strong>cross-project workshops</strong>. During that day, no other integrated
project sessions will be running, which should facilitate presence of
key stakeholders. We'll be able to discuss OpenStack-wide goals,
convergence, integration and other cross-project issues.</p>
<p>We hope that those changes will let us make the most of those 4 days all
together. At the end of the Juno Design Summit we'll discuss whether
those changes were an improvement and whether we should do them again
for the K design summit in Paris in November. See you all in Atlanta in
8 weeks!</p>
<h1>Why we do Feature freeze</h1>
<p><em>2014-03-06, Thierry Carrez</em></p>
<p>Yesterday we entered the <a href="https://wiki.openstack.org/wiki/Icehouse_Release_Schedule">Icehouse development
cycle</a>
Feature Freeze. But with the incredible growth of the OpenStack
development community (508 different contributors over the last 30 days,
including 101 new ones !), I hear a lot of questions about it. I've
explained it on various forums in the past, but I figured it couldn't
hurt to write something a bit more definitive about it.</p>
<h2>Why</h2>
<p>Why freeze features ? That sounds very anti-agile. Isn't our
test-centric development model supposed to protect us from regressions
anyway ? Those are valid questions. Let's start with what feature freeze is
not. Feature freeze should only affect the integrated OpenStack release.
If you don't release (i.e. if you don't special-case certain moments in
the development), then feature freezing makes little sense. It's also
not a way to punish people who failed to meet a deadline. There are
multiple reasons why a given feature might miss a deadline, and most of
those are not the fault of the original author of the feature. We do
time-based releases, so some features and some developers will
necessarily be caught on the wrong side of the fence at some point and
need to wait for the next boat. It's an artifact of open innovation
projects.</p>
<p>Feature freeze (also known as "FF") is, obviously, about stopping adding
new features. You may think of it as artificially blocking your
progress, but this has a different effect on other people:</p>
<ul>
<li>
<p>As was evidenced by the Icehouse cycle, good <strong>code reviewers</strong> are
a scarce resource. The first effect of feature freeze is that it
limits the quantity of code reviews and make them all about
bugfixes. This lets reviewers concentrate on getting as many
bugfixes in as possible before the "release". It also helps
<strong>developers</strong> spend time on bugfixes. As long as they can work on
features, their natural inclination (or their employer's orders) might
conflict with the project's interest at this time in the cycle, which
is to make that point in time we call the "release" as bug-free as
possible.</p>
</li>
<li>
<p>From a <strong>QA</strong> perspective, stopping the addition of features means
you can spend useful time testing "in real life" how OpenStack
behaves. There is only so much our automated testing will catch. And
it's highly frustrating to spend time testing software that
constantly changes under you.</p>
</li>
<li>
<p>QA is not the only group that needs to catch up. For the
<strong>documentation</strong> team and the <strong>I18N</strong> team, feature freeze is
essential. It's difficult to write documentation if you don't know
what will be in the end product. It's frustrating to translate
strings that are removed or changed the next day.</p>
</li>
<li>
<p>And then you have all the downstream consumers of the release that
can use time to prepare it. <strong>Packagers</strong> need software that doesn't
constantly change and add dependencies, so that they can prepare
packages for OpenStack projects that are released as close to our
release date as possible. The <strong>marketing team</strong> needs time to look
into what was produced over the cycle and arrange it in key messages
to communicate to the outside world at release time.</p>
</li>
<li>
<p>Finally, for <strong>release management</strong>, feature freeze is a tool to
reduce risk. The end goal is to avoid introducing an embarrassing
regression just before release. By gradually limiting the impact of
what we accept in the release branch (using feature freeze, but also
using the <a href="https://wiki.openstack.org/wiki/ReleaseCycle#Pre-release_.28Release_Candidates_dance.29">RC
dance</a>
that comes next), we try our best to prevent that.</p>
</li>
</ul>
<h2>Exceptions</h2>
<p>For all these groups, it's critical that we stop adding features,
changing behavior, adding new configuration options, or changing
translatable strings as early as possible. Of course, it's a trade-off.
There might be things that are essential to the success of the release,
or things that are obviously risk-limited. That's why we have an
exception process: the Feature Freeze exceptions ("FFEs").</p>
<p>Feature freeze exceptions may be granted by the PTL (with the friendly
but strong advice from the release management team). The idea is to
weigh the raw benefit of having that feature <strong>in</strong> the release, against
the complexity of the code that is being brought in, its risk of causing
a regression, and how deep we are in feature freeze already. A
self-contained change that is ready to merge a few days after feature
freeze is a lot more likely to get an exception than a refactoring of a
key layer that still needs some significant work to land. It also
depends on how many exceptions were already granted on that project,
because at some point adding anything more just causes too much
disruption.</p>
<p>It's a difficult call to make, and the release management team is here
to help the PTLs make it. If your feature gets denied, don't take it
personally. As you saw, there are a large number of factors involved. Our
common goal is to raise the quality of the end release, and <em>every</em>
feature freeze exception we grant is a step <em>away</em> from that. We just
can't take too many steps back and still guarantee we'll win the
race.</p>The dilemma of open innovation2014-02-24T14:44:00+01:002014-02-24T14:44:00+01:00Thierry Carreztag:ttx.re,2014-02-24:/the-dilemma-of-open-innovation.html<h2>Open innovation vs. proprietary innovation</h2>
<p>For companies, there are two ways to develop open source projects. The
first one is to keep design and innovation inside your corporate
borders, and only accept peripheral contributions. In that case you
produce open source software, but everything else resembles traditional
software development: you …</p><h2>Open innovation vs. proprietary innovation</h2>
<p>For companies, there are two ways to develop open source projects. The
first one is to keep design and innovation inside your corporate
borders, and only accept peripheral contributions. In that case you
produce open source software, but everything else resembles traditional
software development: you set the goals and roadmap for your product,
and organize your development activity to meet those goals, using Agile
or waterfall methodologies.</p>
<p>The second one is what we call <em>open innovation</em>: build a common and
level playing field for contributions from anywhere, under the auspices
of an independent body (foundation or other). In that case you don't
really have a roadmap: what ends up in the software is what the
contributors manage to push through a maintainers' trust tree (think: the
Linux kernel) or a drastic code review / CI gate (think: OpenStack).
Products or services are generally built on top of those projects and
let the various participants differentiate on top of the common
platform.</p>
<p>Now, while I heavily prefer the second option (which I find much closer
to the ideals of free software), I recognize that both options are valid
and both are open source. The first one ends up attracting far fewer
contributions, but it works quite well for niche, specialized products
that require some specific know-how and where focused product design
gives you an edge. But the second works better to reach universal
adoption and total world domination.</p>
<h2>A tragedy of the commons</h2>
<p>The dilemma of open innovation is that it's a natural tragedy of the
commons. You need strategic contributions to keep the project afloat:
people working on project infrastructure, QA, security response,
documentation, bugfixing or release management, which do not directly
contribute to your employer's baseline as much as a tactical contribution
(like a driver to interface with your hardware) would. Some companies
contribute those necessary resources, while some others just get the
benefits of monetizing products or services on top of the platform
without contributing their fair share. The risk, of course, is that the
strategic contributor gets tired of paying for the free rider.</p>
<p>Open innovation is a living ecosystem, a society. Like all societies, it
has its parasites, its defectors, those who don't live by the rules.
And like all societies, it actually <em>needs</em> a certain amount of
defectors, as it makes the society stronger and more able to evolve. The
trick is to keep the amount of parasites down to a tolerable level. In
our world, this is usually done by increasing the difficulty or the cost
of defecting, while reducing the drawbacks or the cost of cooperating.</p>
<h2>Keeping our society healthy</h2>
<p>In his book <em>Liars and Outliers</em>, Bruce Schneier details the various
knobs a society can play with to adjust the number of defectors. There
are moral pressures, reputational pressures, institutional pressures and
security pressures. In open innovation projects, moral pressures and
security pressures don't work that well, so we usually use a combination
of institutional pressures (licensing, trademark rules) and reputational
pressures (praising contributors, shaming free riders) to keep defectors
to an acceptable level.</p>
<p>Those are challenges that are fully understood and regularly applied in
the Linux kernel project. For OpenStack, the meteoric growth of the
project (and the expertise land-grab that came with it) protected us
from the effects of the open innovation dilemma so far. But the
Technical Committee shall keep an eye on this dilemma and be ready to
adjust the knobs if it starts becoming more of a problem. Because at
some point, it will.</p>StoryBoard sprint in Brussels2014-02-05T15:20:00+01:002014-02-05T15:20:00+01:00Thierry Carreztag:ttx.re,2014-02-05:/storyboard-sprint-in-brussels.html<p>StoryBoard is a project I started a few months ago. We have been running
into a number of issues with Launchpad (inability to have blueprints
spanning multiple code bases, inability to have flexible project group
views, inability to use non-Launchpad OpenID for login...), and were
investigating replacements. I was tired …</p><p>StoryBoard is a project I started a few months ago. We have been running
into a number of issues with Launchpad (inability to have blueprints
spanning multiple code bases, inability to have flexible project group
views, inability to use non-Launchpad OpenID for login...), and were
investigating replacements. I was tired of explaining why those
alternatives wouldn't work for our task tracking, so I started to
describe the features we needed, and ended up writing a proof-of-concept
to show a practical example.</p>
<p><img alt="storyboard-poc" src="https://ttx.re/images/storyboard-old.png"></p>
<p>That proof-of-concept was sufficiently compelling that the
Infrastructure team decided we should follow the path of writing our own
tool. To be useful, task tracking for complex projects has to precisely
match your workflow. And the POC proved that it wasn't particularly
difficult to write. Then people from HP, Mirantis and Red Hat joined this
effort.</p>
<p>My Django-based proof-of-concept had a definite last-century feel to it,
though. We wanted a complete REST API to cover automation and scripting
needs, and multiple clients on top of that. Time was ripe for doing
things properly and start building a team effort around this. Time was
ripe for... the StoryBoard sprint.</p>
<p>We gathered in Brussels for two days in advance of FOSDEM, in a meeting
room sponsored by the OpenStack Foundation (thanks!). On day 2 there were
12 of us in the room, which was more than we expected !</p>
<p><img alt="sprint" src="https://ttx.re/images/storyboard-sprint.jpg"></p>
<p>Colette helped us craft a mission statement and structure our goals.
Michael presented an architecture (static JS client on top of
OpenStack-like REST service) that we blessed. Jaromir started to draw
wireframes. Sergey, Ruslan and Nikita fought uncooperative consulates
and traveled at night to be present on day 2. We also confirmed a number
of other technology choices (Bootstrap, AngularJS...). We discussed the
basic model, bikeshedded over StoryBoard vs. Storyboard and service
URLs. We got a lot covered, had very few breaks, ate nice food and drank
nice beer. But more importantly, we built a strong set of shared
understandings which should help us make progress as a united team going
forward.</p>
<p>We have automated testing and continuous deployment set up now, and once
the initial basic functionality is up (MVP0) we should iterate fast. The
Infrastructure program is expected to be the first to dogfood this, and
the goal is to have something interesting to present to other programs
by the Atlanta summit. To participate or learn more about StoryBoard,
please join us on #storyboard on Freenode IRC, or at our <a href="https://wiki.openstack.org/wiki/Meetings#StoryBoard_Meeting">weekly
meeting</a>.</p>Icehouse-2 velocity analysis2014-02-03T16:08:00+01:002014-02-03T16:08:00+01:00Thierry Carreztag:ttx.re,2014-02-03:/icehouse-2-velocity-analysis.html<p>Looking at our recently-concluded <em>icehouse-2</em> development timeframe, we
landed far fewer features and bugfixes than we wanted and expected. That
created concerns about us losing our velocity, so I ran a little
analysis to confirm or deny that feeling.</p>
<h3>Velocity loss ?</h3>
<p>If we compare <em>icehouse</em> to the <em>havana</em> cycle and …</p><p>Looking at our recently-concluded <em>icehouse-2</em> development timeframe, we
landed far fewer features and bugfixes than we wanted and expected. That
created concerns about us losing our velocity, so I ran a little
analysis to confirm or deny that feeling.</p>
<h3>Velocity loss ?</h3>
<p>If we compare <em>icehouse</em> to the <em>havana</em> cycle and focus on implemented
blueprints (not the best metric), it is pretty obvious that <em>icehouse-2</em>
was disappointing:</p>
<blockquote>
<p>havana-1: 63<br>
havana-2: 100<br>
icehouse-1: 69<br>
icehouse-2: 50</p>
</blockquote>
<p>Using the first milestone as a baseline (growth of 10% expected), we
should have been at 110 blueprints, so we are at 45% of the expected
results. That said, looking at bugs gives a slightly different picture:</p>
<blockquote>
<p>havana-1: 671<br>
havana-2: 650<br>
icehouse-1: 738<br>
icehouse-2: 650</p>
</blockquote>
<p>The first milestone baseline again gives a 10% expected growth, which
means the target was 715 bugs... but we "only" fixed 650 bugs (like in
<em>havana-2</em>). So on the bugfixes front, we are at 91% of the expected
result.</p>
<h3>Comparing with grizzly</h3>
<p>But <em>havana</em> is not really the cycle we should compare <em>icehouse</em> with.
We should compare with another cycle where the end-of-year holidays hit
during the -2 milestone development... so <strong><em>grizzly</em></strong>. Let's look at
the number of commits (ignoring merges), for a number of projects that
have been around since then. Here are the results for nova:</p>
<blockquote>
<p>nova grizzly-1: 549 commits<br>
nova grizzly-2: 465 commits<br>
nova icehouse-1: 548 commits<br>
nova icehouse-2: 282 commits</p>
</blockquote>
<p>Again using the -1 milestone as a baseline for expected growth (here
+0%), nova in <em>icehouse-2</em> ended up at 61% of the expected number of
commits. The results are similar for neutron:</p>
<blockquote>
<p>neutron grizzly-1: 155 commits<br>
neutron grizzly-2: 128 commits<br>
neutron icehouse-1: 203 commits<br>
neutron icehouse-2: 110 commits</p>
</blockquote>
<p>Considering the -1 milestones gives an expected growth in commits
between <em>grizzly</em> and <em>icehouse</em> of +31%. <em>Icehouse-2</em> is at 66% of
expected result. So not good but not catastrophic either. What about
cinder ?</p>
<blockquote>
<p>cinder grizzly-1: 86 commits<br>
cinder grizzly-2: 54 commits<br>
cinder icehouse-1: 175 commits<br>
cinder icehouse-2: 119 commits</p>
</blockquote>
<p>Now that's interesting... Expected cinder growth between <em>grizzly</em> and
<em>icehouse</em> is +103%. <em>Icehouse-2</em> scores at 108% of the expected,
<em>grizzly</em>-based result.</p>
<blockquote>
<p>keystone grizzly-1: 95 commits<br>
keystone grizzly-2: 42 commits<br>
keystone icehouse-1: 116 commits<br>
keystone icehouse-2: 106 commits</p>
</blockquote>
<p>That's even more apparent with keystone, which had a quite disastrous
<em>grizzly-2</em>: expected growth is +22%, Icehouse-2 is at 207% of the
expected result. Same for Glance:</p>
<blockquote>
<p>glance grizzly-1: 100 commits<br>
glance grizzly-2: 38 commits<br>
glance icehouse-1: 98 commits<br>
glance icehouse-2: 89 commits</p>
</blockquote>
<p>Here we expect 2% fewer commits, so based on <em>grizzly-2</em> we should have
had 37 commits... <em>icehouse-2</em> here is at 240% !</p>
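<p>The milestone arithmetic used throughout this post can be reproduced with a
short script (a sketch; the function name is mine, and the figures are the
nova numbers quoted above):</p>
```python
def expected_score(base_m1, base_m2, cur_m1, cur_m2):
    """Score a -2 milestone against a baseline cycle.

    The -1 milestones define the expected growth between the two
    cycles; applying that growth to the baseline -2 milestone gives
    the count we "should" have reached.
    """
    growth = cur_m1 / base_m1          # e.g. 548/549, i.e. +0% for nova
    expected = base_m2 * growth        # expected icehouse-2 count
    return cur_m2 / expected           # fraction of the expectation met

# nova: grizzly-1/-2 = 549/465 commits, icehouse-1/-2 = 548/282 commits
print(round(expected_score(549, 465, 548, 282) * 100))  # 61
```
<p>Feeding in the cinder or keystone numbers from the blockquotes above yields
the 108% and 207% scores quoted in the text.</p>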
<p>In summary, while it is quite obvious that we delivered far less than we
wanted to, due to the holidays and the recent gate issues, from a
velocity perspective <em>icehouse-2</em> is far from being disastrous if you
compare it to the last development cycle where the holidays happened at
the same time in the cycle. Smaller projects in particular have handled
that period significantly better than last year.</p>
<p>We just need to integrate the fact that the October - April cycle
includes a holiday period that will reduce our velocity... and lower our
expectations as a result.</p>OpenStack @ FOSDEM '142014-01-09T14:12:00+01:002014-01-09T14:12:00+01:00Thierry Carreztag:ttx.re,2014-01-09:/openstack-fosdem-14.html<p>Every year, free and open source developers from all over Europe and
beyond converge in cold Brussels for a week-end of talks, hacking and
beer. OpenStack will be present !</p>
<p>We have a number of devroom and lightning talks already scheduled:</p>
<p><em>Saturday 12:20 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Autoscaling …</strong></p><p>Every year, free and open source developers from all over Europe and
beyond converge in cold Brussels for a week-end of talks, hacking and
beer. OpenStack will be present !</p>
<p>We have a number of devroom and lightning talks already scheduled:</p>
<p><em>Saturday 12:20 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Autoscaling best practices</strong><br>
Marc Cluet will look into autoscaling using Heat and Ceilometer as
examples.</p>
<p><em>Saturday 13:00 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Network Function Virtualization and Network Service Insertion and
Chaining</strong><br>
Balaji Padnala will present NFV and how to deploy it using OpenStack
and OpenFlow Controller.</p>
<p><em>Saturday 13:40 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>oVirt and OpenStack Storage (present and future)</strong><br>
Federico Simoncelli will cover integration between oVirt and
Glance/Cinder for storage needs.</p>
<p><em>Saturday 15:00 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Why, Where, What and How to contribute to OpenStack</strong><br>
I will go through a practical introduction to OpenStack development and
explain why you should contribute if you haven't already.</p>
<p><em>Saturday 16:20 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Hypervisor Breakouts - Virtualization Vulnerabilities and OpenStack
Exploitation</strong><br>
Rob Clark will explore this class of interesting vulnerabilities from
an OpenStack perspective.</p>
<p><em>Saturday 17:40 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>oVirt applying Nova scheduler concepts for data center
virtualization</strong><br>
Gilad Chaplik will present how oVirt could reuse OpenStack Nova
scheduling concepts.</p>
<p><em>Sunday 10:00 in U.218A (Testing and automation devroom)</em><br>
<strong>Preventing craziness: a deep dive into OpenStack testing
automation</strong><br>
Me again, with a technical exploration of the OpenStack gating system and
its unique challenges.</p>
<p><em>Sunday 13:40 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Tunnels as a Connectivity and Segregation Solution for Virtualized
Networks</strong><br>
Join Assaf Muller for an architectural, developer oriented overview of
(GRE and VXLAN) tunnels in OpenStack Networking.</p>
<p><em>Sunday 16:20 in Chavanne (Virtualization and IaaS devroom)</em><br>
<strong>Bring your virtualized networking stack to the next level</strong><br>
Mike Kolesnik will look into integration opportunities between oVirt
and OpenStack Neutron.</p>
<p><em>Sunday 17:00 in Ferrer (Lightning talks room)</em><br>
<strong>Putting the PaaS in OpenStack</strong><br>
<del>Dirk</del> Diane Mueller will give us an update on cross-community
collaboration between OpenStack, Solum, Docker and OpenShift.</p>
<p><em>Sunday 17:20 in Ferrer (Lightning talks room)</em><br>
<strong>Your Complete Open Source Cloud</strong><br>
Dave Neary should explain how to mix OpenStack with oVirt, OpenShift
and Gluster to build a complete private cloud.</p>
<p>We'll also have a booth manned by OpenStack community volunteers ! I
hope to see you all there.</p>Lessons and outcomes from the Hong-Kong summit2013-11-13T14:23:00+01:002013-11-13T14:23:00+01:00Thierry Carreztag:ttx.re,2013-11-13:/lessons-and-outcomes-from-the-hong-kong-summit.html<p>Just back from an amazing week at the OpenStack Summit in Hong-Kong, I
would like to share a number of discussions we had (mainly on the
<a href="http://icehousedesignsummit.sched.org/overview/type/release+management">release management
track</a>)
and mention a few things I learned there.</p>
<p>First of all, <strong>Hong-Kong is a unique city</strong>. Skyscrapers built on
vertiginous slopes …</p><p>Just back from an amazing week at the OpenStack Summit in Hong-Kong, I
would like to share a number of discussions we had (mainly on the
<a href="http://icehousedesignsummit.sched.org/overview/type/release+management">release management
track</a>)
and mention a few things I learned there.</p>
<p>First of all, <strong>Hong-Kong is a unique city</strong>. Skyscrapers built on
vertiginous slopes, crazy population density, awesome restaurants, shops
everywhere... Everything is clean and convenient (think: Octopus cards),
even as it grows extremely fast. Everyone should go there at least once
in their lives !</p>
<p>On the Icehouse Design Summit side, <strong>the collaboration magic happened
again</strong>. I should be used to it by now, but it is still amazing to build
this level playing field for open design, fill it with smart people and
see them make so much progress over 4 days. We can still improve,
though: for example I'll make sure we get whiteboards in every room for
the next time :). As was mentioned in the feedback session, we are
considering staggering the design summit and the conference (to let
technical people participate in the latter), set time aside to discuss
cross-project issues, and set up per-project space so that collaboration
can continue even if there is no scheduled "session" going on.</p>
<p>I have been mostly involved in <a href="https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Release_Management">release management
sessions</a>.
We discussed the <a href="https://wiki.openstack.org/wiki/Icehouse_Release_Schedule">Icehouse release
schedule</a>,
with a <strong>proposed release date of April 17</strong>, and the possibility to
have a pre-designated <strong>"off" week</strong> between release and the J design
summit. We discussed changes in the format of the weekly
<a href="https://wiki.openstack.org/wiki/Meetings/ProjectMeeting">project/release status
meeting</a>, where
we should move per-project status updates off-meeting to be able to
focus on cross-project issues instead. During this cycle we should also
work on streamlining library release announcements. For stable branch
maintenance, we decided to officially drop support for version n-2 by
feature freeze (rather than at release time), which reflects more
accurately what ended up being done during the past cycles. The security
support is now aligned to stable branch support, which should make sure
the vulnerability management team (VMT) doesn't end up having to
maintain old stable branches that are already abandoned by the stable
branch maintainers. Finally, the VMT should review the projects from all
official programs to come up with a clear list of what projects are
actually security-supported and which aren't.</p>
<p>Apart from the release management program, I'm involved in two pet
projects: <a href="https://wiki.openstack.org/wiki/Rootwrap">Rootwrap</a> and
<a href="https://github.com/openstack-infra/storyboard">StoryBoard</a>.
<strong>Rootwrap</strong> should be split from the oslo-incubator into its own
package early in the Icehouse cycle, and its usage in Nova, Cinder and
Neutron should be reviewed to result in incremental strengthening.
<strong>StoryBoard</strong> (our next-generation task tracker) generated a lot of
interest at the summit, I expect a lot of progress will be made in the
near future. Its architecture might be overhauled from the current POC,
so stay tuned.</p>
<p>Finally, it was great meeting everyone again. Our PTLs and Technical
Committee members are a bunch of awesome folks, this open source project
is in great hands. More generally, it seems that we not only designed a
new way of building software, we also created a network of individuals
and companies interested in that kind of open collaboration. That
network explains why it is so easy for people to jump from one company
to another, while continuing to do the exact same work for the OpenStack
project itself. And for developers, I think it's a great place to be in:
if you haven't already, you should definitely consider joining us.</p>An analysis of the Technical Committee election2013-10-23T10:34:00+02:002013-10-23T10:34:00+02:00Thierry Carreztag:ttx.re,2013-10-23:/an-analysis-of-the-technical-committee-election.html<p>When we
<a href="https://ttx.re/history-of-openstack-governance.html">changed</a>
the Technical Committee membership model to an all-directly-elected
model a few months ago, we proposed we would enable detailed ballot
reporting in order to be able to test alternative algorithms and run
various analysis over the data set. As an official for this election,
here is my …</p><p>When we
<a href="https://ttx.re/history-of-openstack-governance.html">changed</a>
the Technical Committee membership model to an all-directly-elected
model a few months ago, we proposed we would enable detailed ballot
reporting in order to be able to test alternative algorithms and run
various analysis over the data set. As an official for this election,
here is my analysis of <a href="http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/results.pl?id=E_5ef3f04b3c641f3b">the
results</a>,
hoping it will help in the current discussion on a potential evolution
of the Foundation individual members voting system.</p>
<h3>Condorcet method</h3>
<p>In the OpenStack technical elections, we always used the Condorcet
method (with the Schulze completion method), as implemented by
<a href="http://www.cs.cornell.edu/w8/~andru/civs/">Cornell's CIVS</a> public
voting system. In a Condorcet vote, you rank your choices in order of
preference (it's OK to rank multiple choices at the same level). To
calculate the results, you simulate 1:1 contests between all candidates
in the set. If someone wins all such contests, he is the Condorcet
winner for the set. The completion method is used to determine the
winner when there is no clear Condorcet winner. Most completion methods
can result in ties, which then need to be broken in a fair way.</p>
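<p>The pairwise-contest logic described above can be sketched in a few lines
of Python (an illustration only, not the CIVS/Schulze implementation; the
ballot format and names are mine):</p>
```python
def condorcet_winner(candidates, ballots):
    """Return the candidate who wins every 1:1 contest, or None.

    Each ballot maps candidate -> rank (lower is preferred; equal
    ranks are allowed). When no such winner exists, a completion
    method such as Schulze takes over.
    """
    def beats(a, b):
        # a beats b if more voters rank a strictly above b
        return (sum(1 for r in ballots if r[a] < r[b]) >
                sum(1 for r in ballots if r[b] < r[a]))

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # no Condorcet winner: fall back to a completion method

# Three voters ranking three candidates:
ballots = [{"A": 1, "B": 2, "C": 3},
           {"A": 1, "C": 2, "B": 3},
           {"B": 1, "A": 2, "C": 3}]
print(condorcet_winner(["A", "B", "C"], ballots))  # A
```
<p>Here "A" wins both of its 1:1 contests (2-1 against "B", 3-0 against "C"),
so it is the Condorcet winner for the set.</p>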
<h3>Condorcet spread</h3>
<p>One thing we can analyze is the spread of the rankings for any given
candidate:</p>
<p><img alt="TCelection" src="https://ttx.re/images/tc-f2013.png"></p>
<p>On that graph the bubbles on the left represent the number of high
rankings for a given candidate (bubbles on the right represent low
rankings). When multiple candidates are given the same rank, we average
their ranking (that explains all those large bubbles in the middle of
the spectrum). A loved-or-hated candidate would have large bubbles at
each end of the spectrum, while a consensus candidate would not.</p>
<p>Looking at the graph we can see how Condorcet favors consensus
candidates (Doug Hellmann, James E. Blair, John Griffith) over
less-consensual ones (Chris Behrens, Sergey Lukjanov, Boris Pavlovic).</p>
<h3>Proportional Condorcet ?</h3>
<p>Condorcet indeed favors consensus candidates (and "natural" 1:1 election
winners). It is not designed to represent factions in a proportional
way, like STV is. There is an experimental proportional representation
option in CIVS software though, and after some ballot conversion we can
run the same ballots and see what it would give.</p>
<p>I set up a test election and the results are
<a href="http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/results.pl?id=E_b3661b025d68a7da">here</a>.
The winning 11 would have included Sergey Lukjanov instead of John
Griffith, giving representation to a less-consensual candidate. That
happens even if a clear majority of voters prefers John to Sergey (John
defeats Sergey in the 1:1 Condorcet comparison by 154-76).</p>
<p>It's not better or worse, it's just different... We'll probably have a
discussion at the Technical Committee to see whether we should enable
this experimental variant, or if we prefer to test it over a few more
elections.</p>
<h3>Partisan voting ?</h3>
<p>Another analysis we can run is to determine if there was any
corporate-driven voting. We can look at the ballots and see how many of
the ballots consistently placed all the candidates from a given company
above any other candidate.</p>
<p>7.8% of ballots placed the 2 Mirantis candidates above any other. 5.2%
placed the 2 IBM candidates above any other. At the other end of the
spectrum, 0.8% of ballots placed all 5 Red Hat candidates above any
other, and 1.1% of the ballots placed all 4 Rackspace candidates above
any other. We can conclude that partisan voting was limited, and that
Condorcet's preference for consensus candidates further limited its
impact.</p>
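<p>The check behind those percentages is straightforward (a sketch; ballots
map each candidate to a rank, lower is better, and the candidate names and
data are illustrative):</p>
```python
def is_partisan(ballot, company):
    """True if every one of the company's candidates is ranked
    strictly above every other candidate on this ballot."""
    others = [c for c in ballot if c not in company]
    return all(ballot[c] < ballot[o] for c in company for o in others)

ballots = [{"A": 1, "B": 1, "C": 3, "D": 4},  # A and B above everyone else
           {"C": 1, "A": 2, "B": 2, "D": 4}]  # C ranked first: not partisan
share = sum(is_partisan(b, {"A", "B"}) for b in ballots) / len(ballots)
print(f"{share:.0%}")  # 50%
```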
<h3>What about STV ?</h3>
<p>STV is another ranked-choice election method, which favors proportional
representation. Like the "proportional representation" CIVS option
described above, it may result in natural Condorcet winners to lose
against more factional candidates.</p>
<p>I would have loved to run the same ballots through STV and compare the
results. Unfortunately STV requires strict ranking of candidates in an
order of preference. I tried converting the ballots and randomly
breaking similar rankings, but the end results vary extremely depending
on that randomness, so we can't really analyze the results in any useful
way.</p>
<h3>Run your own analysis !</h3>
<p>That's it for me, but you can run your own analysis by playing with the
CSV ballot file yourself ! Download it
<a href="http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/download_ballots.pl?id=E_5ef3f04b3c641f3b">here</a>,
and share the results of your analysis if you find anything interesting
!</p>
Getting to Havana2013-10-18T13:43:00+02:002013-10-18T13:43:00+02:00Thierry Carreztag:ttx.re,2013-10-18:/getting-to-havana.html<p>Yesterday, as the final conclusion of the 6-month "Havana" development
cycle, we <a href="http://lists.openstack.org/pipermail/openstack-announce/2013-October/000151.html">released the latest version of
OpenStack</a>,
the 2013.2 version. It's now composed of 9 integrated components, which
saw the completion of more than 400 feature blueprints and the fixing of
more than 3000 reported bugs.</p>
<p>As always …</p><p>Yesterday, as the final conclusion of the 6-month "Havana" development
cycle, we <a href="http://lists.openstack.org/pipermail/openstack-announce/2013-October/000151.html">released the latest version of
OpenStack</a>,
the 2013.2 version. It's now composed of 9 integrated components, which
saw the completion of more than 400 feature blueprints and the fixing of
more than 3000 reported bugs.</p>
<p>As always, it's been an interesting week for me. Not as busy as you'd
think, but lots of hours, day or night, spent frantically checking
dashboards, collecting input from our fearful technical leads, waiting
for a disaster to happen and pushing the right buttons at the right
moment, finally aligning the stars and releasing everything on time.
Yes, I use checklists to make sure I don't overlook anything:</p>
<p><img alt="havana cheat sheet" src="https://ttx.re/images/havana-cheatsheet.jpg"></p>
<p>Even if I have plenty of free time between those key hours, I can't
concentrate on getting anything else done, or get something else started
(like a blog post). This is why, one day after release, I can finally
spend some time looking back on the last stage of the Havana development
cycle and see how well we performed. Here is the graph showing the
number of release-critical bugs logged against various components (those
observing the common feature freeze) as we make progress towards the
final release date:</p>
<p><img alt="havana RCs" src="https://ttx.re/images/havana-rcs.png"></p>
<p>Personally I think we were a bit late, with RC1s globally landing around
October 3 and RC2s still being published around October 15. I prefer
when we can switch to "respin only for major regressions and upgrade
issues" mode at least a week before final release, not two days before
final release. Looking at the graph, we can see where we failed: it took
us 11 days to get a grip on the RC bug count (either by fixing the right
issues, or by stopping adding new ones, or by refining the list and
dropping non-critical stuff). Part of this delay is due to stress
recovery after a rather eventful feature freeze. Part of it is lack of
prioritization and focus on the right bugs. The rest of the graph pretty
much looks like <a href="https://ttx.re/images/grizzly-rcs.png">the Grizzly
one</a>. We were
just at least one week too late.</p>
<p>We'll explore ways to improve on that during the Icehouse Design Summit
in Hong-Kong. One solution might be to add a week between feature freeze
and final release. Another solution would be to filter what gets
targeted to the last milestone to reduce the amount of features that
land late in the cycle, to reduce FeatureFreeze trauma. If you want to
be part of the discussion, <a href="http://www.openstack.org/summit/openstack-summit-hong-kong-2013">join us all in
Hong-Kong</a>
in 18 days !</p>A history of OpenStack project governance2013-06-20T10:18:00+02:002013-06-20T10:18:00+02:00Thierry Carreztag:ttx.re,2013-06-20:/history-of-openstack-governance.html<p>Over the last 3 years, the technical governance of the OpenStack open
source project evolved a lot, and most recently <a href="https://lists.launchpad.net/openstack/msg24526.html">last
Tuesday</a>. As an
elected member of that governance body since April 2011, I witnessed
that evolution from within and helped in drafting the various models
over time. Now seems …</p><p>Over the last 3 years, the technical governance of the OpenStack open
source project evolved a lot, and most recently <a href="https://lists.launchpad.net/openstack/msg24526.html">last
Tuesday</a>. As an
elected member of that governance body since April 2011, I witnessed
that evolution from within and helped in drafting the various models
over time. Now seems like a good time to look back in history, and clear
a few misconceptions about the OpenStack project governance along the
way.</p>
<h3>The POC</h3>
<p>The project was originally created by Rackspace in July 2010 and seeded
with code from NASA (Nova) and Rackspace (Swift). At that point an
<a href="https://wiki.openstack.org/w/index.php?title=Governance&oldid=8370">initial project
governance</a>
was set up. There was an <em>Advisory Board</em> (which was never really
created), the <em>OpenStack Architecture Board</em>, and technical committees
for each subproject, each led by a Technical Lead. The OpenStack
Architecture Board had 5 members appointed by Rackspace and 4 elected by
the community, with 1-year to 3-year (!) terms. The technical leads for
the subprojects were appointed by Rackspace.</p>
<p>By the end of the year 2010 the Architecture Board was renamed <em>Project
Oversight Committee</em> (POC), but its structure <a href="https://wiki.openstack.org/w/index.php?title=Governance&oldid=8377">didn't
change</a>.
While it left room for community input, the POC was rightfully seen as
fully controlled by Rackspace, which was a blocker to deeper involvement
for a lot of the big players in the industry.</p>
<p>It was a danger for the open source project as well, as the number of
contributors external to Rackspace grew. As countless examples prove,
when the leadership of an open source project is not seen as
representative of its contributors, you face the risk of a revolt: a
fork of the code, with your contributors leaving for a more
meritocratic and representative alternative.</p>
<h3>The PPB</h3>
<p>In March 2011, a significant change was
<a href="http://www.openstack.org/blog/2011/03/openstack-governance-update/">introduced</a>
to address this perceived risk. Technical leads for the 3 projects
(Nova, Swift, and Glance at that point) would from now on be directly
elected by their contributors and called <em>Project Technical Leads</em>
(PTLs). The POC was replaced by the <em>Project Policy Board</em> (PPB), which
had 4 seats appointed by Rackspace, 3 seats for the above PTLs, and 5
seats directly-elected by all the contributors of the project. By spring
2012 we grew to 6 projects and therefore the PPB had 15 members.</p>
<p>This was definitely an improvement, but it was not perfect. Most
importantly, the governance model itself was still owned by Rackspace,
which could potentially change it and displace the PPB if it was ever
unhappy with it. This concern was still preventing OpenStack from
reaching the next adoption step. In October 2011, Rackspace therefore
<a href="http://www.openstack.org/blog/2011/10/openstack-foundation/">announced</a>
that they would set up an independent Foundation. By the summer of 2012
that move was completed and Rackspace had transferred control over
the governance of the OpenStack project to the <a href="http://www.openstack.org/foundation/">OpenStack
Foundation</a>.</p>
<h3>The TC</h3>
<p>At that point the governance was split into two bodies. The first one is
the <em>Board of Directors</em> for the Foundation itself, which is responsible
for promoting OpenStack, protecting its trademark, and deciding where to
best spend the Foundation's sponsors' money to empower future development
of OpenStack.</p>
<p>The second body was the successor to the PPB, the entity that would
govern the open source project itself. A critical piece in the
transition was the need to preserve and improve the independence of the
technical meritocracy. The bylaws of the Foundation therefore instituted
the <em>Technical Committee</em>, a successor for the PPB that would be
self-governed, and would no longer have appointed members (or any
pay-to-play members). The Technical Committee would be <a href="https://wiki.openstack.org/w/index.php?title=Governance/Foundation/TechnicalCommittee&oldid=3208">completely
elected</a>
by the active technical contributors: a seat for each elected PTL, plus
5 directly-elected seats.</p>
<h3>TC 2.0</h3>
<p>The TC started out in September 2012 as an 11-member committee, but with
the addition of 3 new projects (and the creation of a special seat for
Oslo), it grew to 15 members in April 2013, with the prospect of growing
to 18 members in Fall 2013 if all the projects that recently applied for
incubation were finally accepted. With the introduction of the
<a href="https://lists.launchpad.net/openstack/msg20881.html">"integrated" project
concept</a> (separate
from the "core" project concept), we faced the addition of even more
projects in the future, and committee bloat would inevitably ensue. That
created a potential for resistance to the addition of "small" projects
or the splitting of existing projects (moves that may make sense
technically but should not warrant yet another TC seat).</p>
<p>Another issue was the ever-increasing representation of "vertical"
functions (project-specific PTLs elected by each project's contributors)
vs. general members elected by all contributors. In the original PPB mix,
there were 3 "vertical" seats for 5 general seats, a nice balance that
brought in specific expertise while preserving an overall cross-project
view. With the growth in the number of projects, the current TC had 10
"vertical" seats for 5 general seats. The time was ripe for a reboot.</p>
<p><a href="https://wiki.openstack.org/wiki/TC_Membership_Models">Various models</a>
were considered and discussed, and while everyone agreed on the need to
change, no model was unanimously seen as perfect. In the end, simplicity
won and we picked a model with <a href="https://wiki.openstack.org/w/index.php?title=Governance/Foundation/TechnicalCommittee&oldid=24429">13 directly-elected
members</a>,
which will be put in place at the Fall 2013 elections.</p>
<h3>Power to the active contributors</h3>
<p>This new model is a direct, representative model, where if you recently
authored a change for an OpenStack project, you get one vote, and a
chance every 6 months to choose new people to represent you. This model
is pretty flexible and should allow for further growth of the project.</p>
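<p>The eligibility rule above lends itself to a short sketch. This is
purely illustrative code, not OpenStack tooling: the
<code>electorate</code> function, the 365-day window and the sample data
are all assumptions made up for the example; the authoritative
definition of the electorate lives in the Foundation bylaws.</p>
<pre><code>from datetime import date, timedelta

def electorate(commits, today, window_days=365):
    """Return the set of authors eligible to vote.

    commits: iterable of (author_email, merge_date) tuples for merged changes.
    An author qualifies by having at least one change merged within the window.
    """
    cutoff = today - timedelta(days=window_days)
    return {author for author, merged in commits if merged >= cutoff}

# Alice authored a recent change and gets a vote; Bob's last change is too old.
commits = [
    ("alice@example.com", date(2013, 5, 1)),
    ("bob@example.com", date(2012, 1, 10)),
]
print(sorted(electorate(commits, today=date(2013, 6, 20))))  # ['alice@example.com']
</code></pre>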
<p>Few open source projects use such a direct governance model. In Apache
projects for example (often cited as a model of openness and
meritocracy), the oversight committee equivalent to OpenStack's TC would
be the PMC. In <a href="http://cloudstack.apache.org/bylaws.html">most</a> cases,
PMC membership is self-sustaining: existing PMC members ultimately
decide, through discussions and
<a href="http://www.apache.org/dev/pmc.html#newpmc">votes</a> on the private PMC
list, who the new PMC members should be. In contrast, in OpenStack the
recently-active contributors end up being in direct control of who their
leaders are, and can replace the Technical Committee members if they
feel those members are no longer relevant or representative. Oh, and
the TC doesn't use a private list: all our meetings are
<a href="http://eavesdrop.openstack.org/meetings/tc/">public</a> and our
discussions are
<a href="http://lists.openstack.org/pipermail/openstack-tc/">archived</a>.</p>
<p>As far as open source projects governance models go, this is as open,
meritocratic, transparent and direct as it gets.</p>The need for releases2013-04-11T14:32:00+02:002013-04-11T14:32:00+02:00Thierry Carreztag:ttx.re,2013-04-11:/the-need-for-releases.html<p>The beginning of a new release cycle is as good as any moment to
question why we actually go through the hassle of producing OpenStack
releases. <a href="https://wiki.openstack.org/wiki/Release_Cycle">Twice per
year</a>, on a precise date
we announce 6 months in advance, we bless and publish source code
tarballs of the various integrated projects in OpenStack. Every week we
have a
<a href="https://wiki.openstack.org/wiki/Meetings/ProjectMeeting">meeting</a> that
tracks our progress toward this common goal. Why ?</p>
<h3>Releases vs. Continuous deployment</h3>
<p>The question is particularly valid if you take into account the type of
software that we produce. We don't really expect cloud infrastructure
providers to religiously download our source code tarballs every 6
months and run from that. For the largest installations, running much
closer to the master branch and continuously deploying the latest changes
is a really sound strategy. We invested a lot of effort in our gating
systems and QA automated testing to make sure the master branch is
always runnable. We'll discuss at the <a href="http://www.openstack.org/summit/portland-2013/">OpenStack
Summit</a> next week how to
improve CD support in OpenStack. We backport bugfixes to the <a href="https://wiki.openstack.org/wiki/StableBranch">stable
branches</a> post-release. So
why do we continue to single out a few commits and publish them as "the
release" ?</p>
<h3>The need for cycles</h3>
<p>The value is not really in <em>releases</em>. It is in <em>release cycles</em>.</p>
<p>Producing OpenStack involves the work of a large number of people. While
most of those people are paid to participate in OpenStack development,
as far as the OpenStack project goes, we don't manage them. We can't ask
them to work on a specific area, or to respect a given deadline, or to
spend that extra hour to finalize something. The main trick we use to
align everyone and make us all part of the same community is to have a
cycle. We have regular milestones that we ask contributors to target
their features to. We have a <a href="https://wiki.openstack.org/wiki/FeatureFreeze">feature
freeze</a> to encourage
people to switch their mindset to bugfixing. We have weekly meetings to
track progress, communicate where we are and motivate us to go that
extra mile. The common rhythm is what makes us all play in the same
team. The "release" itself is just the natural conclusion of that common
effort.</p>
<h3>A reference point in time</h3>
<p>Singling out a point in time has a number of other benefits. It's easier
to work on documentation if you group your features into a coherent set
(we actually considered shortening our cycles in the past, and the main
blocker was our capacity to produce good documentation often enough).
It's easier to communicate about OpenStack progress and new features if
you do it periodically rather than continuously. It's easier to have
<a href="https://wiki.openstack.org/wiki/Summit">Design Summits</a> every 6 months
if you create a common brainstorm / implementation / integration cycle.
The releases also serve as reference points for API deprecation rules,
for stable release maintenance, and for security backports.</p>
<p>If you're purely focused on the software consumption part, it's easy to
underestimate the value of release cycles. They actually are one of the
main reasons for the pace of development and success of OpenStack so
far.</p>
<h3>The path forward</h3>
<p>We need release <em>cycles</em>... do we need release <em>deliverables</em> ? Do we
actually need to bless and publish a set of source code tarballs ? My
personal view on that is: if there is no additional cost in producing
releases, why not continue to do them ? With the release tooling we have
today, blessing and publishing a few tarballs is as simple as pushing a
tag, running a script and sending an email. And I like how this formally
concludes the development cycle to start the stable maintenance period.</p>
<p>But what about Continuous Deployment ? Well, the fact that we produce
releases shouldn't at all affect our ability to continuously deploy
OpenStack. The master branch should always be in good shape, and we
definitely should have the necessary features in place to fully support
CD. We can have both. So we should have both.</p>Grizzly, the day after2013-04-05T13:08:00+02:002013-04-05T13:08:00+02:00Thierry Carreztag:ttx.re,2013-04-05:/grizzly-the-day-after.html<p>The OpenStack Grizzly release of yesterday officially closes the Grizzly
development cycle. But while I try to celebrate and relax, I can't help
feeling worried and depressed in the hours following the release,
as we discover bugs that we could have (should have ?) caught before
release. It's a kind of postpartum depression for release managers;
please consider this post as part of my therapy.</p>
<h3>Good</h3>
<p>We'd naturally like to release when the software is "ready", "good", or
"bug-free". Reality is, with software of the complexity of OpenStack,
onto which we constantly add new features, there will always be bugs.
So, rather than releasing when the software is bug-free, we "release"
when waiting longer would not really change the quality of the result. We
release when it's time.</p>
<p>In OpenStack, we invest a lot in automated testing, and each proposed
commit goes through an extensive set of unit and integration tests. But
with so many combinations of deployment options, there are still dark
corners that will only be explored by users as they apply the new code
to their specific use case. We encourage users to try new code <em>before</em>
release, by publishing and making noise about milestones, release
candidates... But there will always be a significant number of users who
will not try new code until the point in time we call "release". So
there will always be significant bugs that are discovered (and fixed)
after release day.</p>
<h3>The best point in time</h3>
<p>What we need to do is pick the right moment to "release": when all known
release-critical issues are fixed. When the benefits of waiting longer are
not worth the drawbacks of distracting developers from working on the
next development cycle, or of abandoning the benefits of a predictable
time-based common release.</p>
<p>That's the role of the <a href="https://wiki.openstack.org/wiki/Release_Cycle">Release
Candidates</a> that we
produce in the weeks before the release day. Once all known
release-critical bugs are fixed, we create an RC. If we find new ones before
the release day, we fix them and spin a new release candidate. On
release day, we consider the current release candidates as "final" and
publish them.</p>
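<p>The respin loop described above can be sketched in a few lines. This
is an illustrative model, not the actual OpenStack release tooling: the
event format and the <code>respin_schedule</code> helper are invented
for the example; the real process is driven by humans watching the
release-critical bug lists in Launchpad.</p>
<pre><code># A minimal sketch of the respin rule: cut a new release candidate each
# time the list of known release-critical bugs drops back to zero.
def respin_schedule(events):
    """events: chronological list of ("open", bug_id) / ("fix", bug_id) pairs.

    Returns the number of release candidates that would be produced.
    """
    open_bugs = set()
    candidates = 0
    pending_changes = False  # work landed since the last candidate was cut
    for kind, bug_id in events:
        if kind == "open":
            open_bugs.add(bug_id)
        else:  # "fix"
            open_bugs.discard(bug_id)
        pending_changes = True
        if not open_bugs and pending_changes:
            candidates += 1  # all known RC bugs fixed: cut a new RC
            pending_changes = False
    return candidates

# Two respins: RC1 once bugs a and b are fixed, RC2 after late bug c is fixed.
events = [("open", "a"), ("open", "b"), ("fix", "a"),
          ("fix", "b"), ("open", "c"), ("fix", "c")]
print(respin_schedule(events))  # → 2
</code></pre>
<p>On release day, the latest candidate produced by such a loop is simply
blessed as final.</p>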
<p>The trick, then, is to pick the right length for this feature-frozen
period leading to release, one that gives enough time for each of the
projects in OpenStack to reach the first release candidate
(meaning, "all known release-critical bugs fixed"), and publish this RC1
to early testers. For Grizzly, it looked like this:</p>
<p><img alt="grizzly RCs" src="https://ttx.re/images/grizzly-rcs.png"></p>
<p>This graph shows the number of release-critical bugs in various projects
over time. We can see that the length of the pre-release period is about
right: waiting longer would not have resulted in many more bugs being
fixed. We basically needed to release to get more users to test and
report the next bugs.</p>
<h3>The Grizzly is still alive</h3>
<p>The other thing we need is a process to continue fixing bugs
after the "release". We document the most obvious regressions in the
constantly-updated <a href="https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly">Release
Notes</a>. And we
handle the Grizzly bugs using the stable release update process.</p>
<p>After release, we maintain a branch where important bugfixes are
backported and from which we'll publish point releases. This
<strong>stable/grizzly</strong> branch is maintained by the OpenStack stable
maintenance team. If you see a bugfix that should definitely be
backported, you can tag the corresponding bug in Launchpad with the
<em>grizzly-backport-potential</em> tag to bring it to the team's attention.
For more information on the stable branches, I invite you to read this
<a href="https://wiki.openstack.org/wiki/StableBranch">wiki page</a>.</p>
<h3>Being pumped up again</h3>
<p>The post-release depression usually lasts a few days, until I realize
that not so many bugs were reported. The quality of the new release is
actually always an order of magnitude better than the previous one,
thanks to 6 months' worth of improvements in our amazing continuous
integration system ! We did an incredible job, and it will only
get better !</p>
<p>The final stage of recovery is when our fantastic community gets all
together at the OpenStack Summit. 4 days to witness and celebrate our
success. 4 days to recharge the motivation batteries, brainstorm and
discuss what we'll do over the next 6 months. We are living awesome
times. See you there.</p>UDS is dead, long live ODS2013-03-04T16:44:00+01:002013-03-04T16:44:00+01:00Thierry Carreztag:ttx.re,2013-03-04:/uds-is-dead-long-live-ods.html<p>Back from an (almost) entirely-offline week vacation, a lot of news was
waiting for me. A <a href="http://openstack.booktype.pro/openstack-operations-guide/_draft/_v/1.0/why-we-wrote-this-book/">full
book</a>
was written. OpenStack projects
<a href="http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-02-26-20.03.html">graduated</a>.
An Ubuntu <a href="https://lists.ubuntu.com/archives/ubuntu-devel/2013-February/036537.html">rolling release
model</a>
was considered. But what grabbed my attention was the announcement of
<a href="https://lists.ubuntu.com/archives/ubuntu-devel/2013-February/036502.html">UDS moving to a virtual
event</a>.
And every 3 months. And over two days. And next week.</p>
<p>For someone who attended all UDSes (but one) since Prague in May 2008,
first as a Canonical employee and then as an upstream developer, that was quite a
shock. We all have fond memories and anecdotes of stuff that happened
during those Ubuntu developer summits.</p>
<h3>What those summits do</h3>
<p>For those who never attended one, UDSes (and the OpenStack <em>Design
Summits</em> that were modeled after them) achieve several goals for a
community of open source developers:</p>
<ol>
<li>Celebrate the recent release and motivate your whole developer community for
the next 6 months</li>
<li>Brainstorm early ideas on complex topics, identify key stakeholders
to include in further design discussion</li>
<li>Present an implementation plan for a proposed feature and get
feedback from the rest of the community before starting to work on
it</li>
<li>Reduce duplication of effort by getting everyone working on the same
type of issues in the same room and around the same beers for a few
days</li>
<li>Meet in informal settings people you usually only interact with
online, to get to know them and reduce friction that can build up
after too many heated threads</li>
</ol>
<p>This all sounds very valuable. So why did Canonical decide to kill
UDSes as we knew them, while they were arguably part of their successful
community development model ?</p>
<h3>Who killed UDS</h3>
<p>The reason is that UDS was a very costly event, and it was becoming
less and less useful. A lot of Ubuntu development happens within Canonical
these days, and UDS sessions gradually shifted from being brainstorming
sessions between equal community members to being a formal communication
of upcoming features/plans to gather immediate feedback (point [3]
above). There were not so many brainstorming design sessions anymore
(point [2] above, very difficult to do in a virtual setting), with
design happening more and more <a href="http://www.markshuttleworth.com/archives/1200">behind Canonical
curtains</a>. There is less
need to reduce duplication of effort (point [4] above), with fewer
non-Canonical people starting to implement new things.</p>
<p>Therefore it makes sense to replace it with a less-costly,
purely-virtual communication exercise that still perfectly fills point
[3], with the added benefits of running it more often (keeping everyone
else regularly updated on status) and improving accessibility for remote
participants. If you add to the mix a move to rolling releases, it
almost makes perfect sense. The problem is, they also get rid of points
[1] and [5]. This will result in an even less motivated developer
community, with more tension between Canonical employees and
non-Canonical community members.</p>
<p>I'm not convinced that's the right move. I for one will certainly regret
them. But I think I understand the move in light of Canonical's recent
strategy.</p>
<h3>What about OpenStack Design Summits ?</h3>
<p>Some people have been asking me if OpenStack should move to a similar
model. My answer is <em>definitely not</em>.</p>
<p>When Rick Clark imported the UDS model from Ubuntu to OpenStack, it was
to fulfill one of the <a href="https://wiki.openstack.org/wiki/Open">4 Opens</a> we
pledged: <em>Open Design</em>. In OpenStack Design Summits, we openly debate
how features should be designed, and empower the developers in the room
to make those design decisions. Point [2] above is therefore essential.
In OpenStack we also have a lot of different development groups working
in parallel, and making sure we don't duplicate effort is key to limit
friction and make the best use of our resources. So we can't just pass
on point [4]. With more than 200 different developers authoring changes
every month, the OpenStack development community is way past <a href="http://en.wikipedia.org/wiki/Dunbar%27s_number">Dunbar's
number</a>. Thread after
thread, some resentment can build up over time between opposed
developers. Get them to informally talk in person over a coffee or a
beer, and most issues will be settled. Point [5] therefore lets us keep
a healthy developer community. And finally, with about 20k changes
committed per year, OpenStack developers are pretty busy. Having a week
to celebrate and recharge motivation batteries every 6 months doesn't
sound like a bad thing. So we'd like to keep point [1].</p>
<p>So for OpenStack it definitely makes sense to keep our Design Summits
the way they are. Running them as a track within the OpenStack Summit
allows us to fund them, since there is so much momentum around OpenStack
and so many people interested in attending them. We need to keep
improving the remote participation options to include developers that
unfortunately cannot join us. We need to keep doing it in different
locations over the world to foster local participation. But meeting in
person every 6 months is an integral part of our success, and we'll keep
doing it.</p>
<p>Next stop is in Portland, from April 15 to April 18. <a href="http://www.openstack.org/summit/">Join
us</a> !</p>OpenStack at FOSDEM '132013-01-11T13:34:00+01:002013-01-11T13:34:00+01:00Thierry Carreztag:ttx.re,2013-01-11:/openstack-at-fosdem-13.html<p>In 3 weeks, free and open source software developers will converge on
Brussels for 2+ days of talks, discussions and beer.
<a href="https://fosdem.org/2013/">FOSDEM</a> is still the largest gathering for
our community in Europe, and it will be a pleasure to meet again with
longtime friends. Note that FOSDEM attendance is free as in beer, and
requires no registration.</p>
<p>OpenStack will be present with a number of talks in the <a href="https://fosdem.org/2013/schedule/track/cloud/">Cloud
devroom</a> in the
<em>Chavanne</em> auditorium on Sunday, February 3rd:</p>
<ul>
<li>At 9:30, I'll open the devroom with <a href="https://fosdem.org/2013/schedule/event/state_of_openstack/"><strong>State of the OpenStack Union,
2013</strong></a>.
A talk about what has happened in the OpenStack development community
since last year's presentation at FOSDEM.</li>
<li>At 10:00, don't miss Mark McLoughlin's talk: <strong><a href="https://fosdem.org/2013/schedule/event/openstack_app_arch/">OpenStack: 21st
Century App Architecture and Cloud
Operations</a>.</strong>
He will explain how OpenStack is built with the same resilience and
automation principles as highly-scalable cloud applications.</li>
<li>At 15:00, Rob Clark will detail his <a href="https://fosdem.org/2013/schedule/event/security_priorities/"><strong>Security Priorities for Cloud
Developers</strong></a>:
the main security challenges OpenStack faces and what we should do
about them.</li>
<li>At 15:30, Tomas Sedovic will introduce <a href="https://fosdem.org/2013/schedule/event/openstack_heat/"><strong>Orchestrating complex
deployments on OpenStack using
Heat</strong></a>. The
Heat project is in OpenStack incubation currently so this is a great
opportunity to learn more about it.</li>
<li>Finally to close the day at 16:30, Nick Barcet, Eoghan Glynn and
Julien Danjou will storm the stage and introduce the other OpenStack
project currently in incubation: <a href="https://fosdem.org/2013/schedule/event/openstack_ceilometer/"><strong>Measuring OpenStack: the
Ceilometer
Project</strong></a>.</li>
</ul>
<p>There will also be OpenStack mentions in various other talks during the
day: Martyn Taylor should demonstrate OpenStack Horizon in conjunction
with <a href="https://fosdem.org/2013/schedule/event/image_management/">Aeolus Image
Factory</a> at
13:30, and Vangelis Koukis will present
<a href="https://fosdem.org/2013/schedule/event/synnefo/">Synnefo</a>, which
provides OpenStack APIs, at 14:00.</p>
<p>Finally, I'll also be giving a talk, aimed at Python developers,
about the OpenStack job market sometime on Sunday in the <a href="https://fosdem.org/2013/schedule/track/python/">Python
devroom</a> (room
<em>K.3.401</em>): <strong>Get a Python job, work on OpenStack</strong>.</p>
<p>I hope you will join us in the hopefully-not-dead-frozen-this-time and
beautiful Brussels !</p>What to expect from Grizzly-1 milestone2012-11-23T14:14:00+01:002012-11-23T14:14:00+01:00Thierry Carreztag:ttx.re,2012-11-23:/what-to-expect-from-grizzly-1-milestone.html<p>The first milestone of the OpenStack Grizzly development cycle is <a href="http://lists.openstack.org/pipermail/openstack-announce/2012-November/000054.html">just
out</a>.
What should you expect from it ? What significant new features were
added ?</p>
<p>The first milestones in our <a href="http://wiki.openstack.org/ReleaseCycle">6-month development
cycles</a> are traditionally not
very featureful. That's because we are just out of the previous release,
and still working heavily on bugs (this milestone packs 399 bugfixes !).
It's been only one month since we had our <a href="http://wiki.openstack.org/Summit">Design
Summit</a>, so by the time we formalize
its outcome into blueprints and roadmaps, we are just getting started
with feature implementation. Nevertheless, it collects a lot of new
features and bugfixes that landed in our master branches since
mid-September, when we froze features in preparation for the Folsom
release.</p>
<p><strong>Keystone</strong> is arguably where the most significant changes landed, with
a <a href="https://blueprints.launchpad.net/keystone/+spec/implement-v3-core-api">tech preview of the new API
version</a>
(v3), with <a href="https://blueprints.launchpad.net/keystone/+spec/rbac-keystone-api">policy and RBAC
access</a>
enabled. A new <a href="https://blueprints.launchpad.net/keystone/+spec/ad-ldap-identity-backend">ActiveDirectory/LDAP identity
backend</a>
was also introduced, while the auth_token middleware is <a href="https://blueprints.launchpad.net/keystone/+spec/authtoken-to-keystoneclient-repo">now
shipped</a>
with the Python Keystone client.</p>
<p>In addition to fixing <a href="https://launchpad.net/nova/grizzly/grizzly-1">185
bugs</a>, the <strong>Nova</strong> crew
<a href="https://blueprints.launchpad.net/nova/+spec/delete-nova-volume">removed
nova-volume</a>
code entirely (code was kept in Folsom for compatibility reasons, but
was marked deprecated). Virtualization drivers <a href="https://blueprints.launchpad.net/nova/+spec/no-db-virt">no longer directly
access the
database</a>, as a
first step towards completely <a href="https://blueprints.launchpad.net/nova/+spec/no-db-compute">isolating compute nodes from the
database</a>.
Snapshots are now <a href="https://blueprints.launchpad.net/nova/+spec/snapshots-for-everyone">supported on raw and LVM
disks</a>,
in addition to qcow2. On the hypervisor side, the Hyper-V driver grew
<a href="https://blueprints.launchpad.net/nova/+spec/hyper-v-config-drive-v2">ConfigDrive v2
support</a>,
while the XenServer one can now <a href="https://blueprints.launchpad.net/nova/+spec/xenserver-bittorrent-images">use
BitTorrent</a> as
an image delivery mechanism.</p>
<p>The <strong>Glance</strong> client is <a href="https://blueprints.launchpad.net/glance/+spec/separate-client">no longer
copied</a>
within the Glance server (you can still find it in the Python client
library), and the Glance <a href="https://blueprints.launchpad.net/glance/+spec/glance-simple-db-parity">SimpleDB driver reaches feature
parity</a>
with the SQLAlchemy-based one. A number of cleanups were implemented in
<strong>Cinder</strong>, including in <a href="https://blueprints.launchpad.net/cinder/+spec/driver-cleanup">volume drivers code
layout</a>
and <a href="https://blueprints.launchpad.net/cinder/+spec/cinder-apiv2">API versioning
handling</a>.
Support for <a href="https://blueprints.launchpad.net/cinder/+spec/xenapi-storage-manager-nfs">XenAPI storage manager for
NFS</a>
is back, while the API grew a call to <a href="https://blueprints.launchpad.net/cinder/+spec/list-bootable-volumes">list bootable
volumes</a>
and a <a href="https://blueprints.launchpad.net/cinder/+spec/cinder-hosts-extension">hosts
extension</a>
to allow service status querying.</p>
<p>The <strong>Quantum</strong> crew was also quite busy. The <a href="https://blueprints.launchpad.net/quantum/+spec/ryu-plugin-update-for-ryu">Ryu plugin was
updated</a>
and now features <a href="https://blueprints.launchpad.net/quantum/+spec/ryu-tunnel-support">tunnel
support</a>.
The preparatory work to <a href="https://blueprints.launchpad.net/quantum/+spec/quantum-service-framework">add advanced
services</a>
was landed, as well as support for <a href="https://blueprints.launchpad.net/quantum/+spec/high-available-quantum-queues-in-rabbitmq">highly-available RabbitMQ
queues</a>.
The feature parity gap with nova-network was reduced by the introduction of
a <a href="https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups">Security Groups
API</a>.</p>
<p><strong>Horizon</strong> saw a lot of changes under the hood, including <a href="https://blueprints.launchpad.net/horizon/+spec/unify-config">unified
configuration</a>.
It now supports <a href="https://blueprints.launchpad.net/horizon/+spec/flavor-extra-specs">Nova flavor extra
specs</a>.
As a first step towards providing cloud admins with more targeted
information, a <a href="https://blueprints.launchpad.net/horizon/+spec/system-info-panel">system info
panel</a>
was added. <strong>Oslo</strong> (formerly known as openstack-common) also saw a
number of improvements. The config module (cfg) was <a href="https://blueprints.launchpad.net/oslo/+spec/cfg-argparse">ported to
argparse</a>.
Common <a href="https://blueprints.launchpad.net/oslo/+spec/service-infrastructure">service management
code</a>
was pushed to the Oslo incubator, as well as a generic <a href="https://blueprints.launchpad.net/oslo/+spec/new-policy-language">policy
engine</a>.</p>
<p>That's only a fraction of what will appear in the final release of
Grizzly, scheduled for April 2013. A lot of work was started in the last
weeks but will only land in the next milestone. To get a glimpse of
what's coming up, you can follow the <a href="http://wiki.openstack.org/releasestatus/">Grizzly release status
page</a>!</p>
<h1>Why OpenStack doesn't need a Linus</h1>
<p><em>Thierry Carrez, 2012-10-25</em></p>
<p>As comparing OpenStack with Linux becomes an <a href="http://www.devx.com/blog/is-openstack-the-linux-of-cloud.html">increasingly popular
exercise</a>,
it's only natural that people and press articles start to ask where the
Linus Torvalds of OpenStack is, or <a href="http://www.networkworld.com/news/2012/102412-openstack-linus-263659.html?hpg1=bn">who the Linus Torvalds of
OpenStack</a>
should be. This assumes that technical leaders could somehow be
appointed in OpenStack. This assumes that the single dictator model is
somehow reproducible or even desirable. And this assumes that the
current technical leadership in OpenStack is somehow lacking. I think
all those three assumptions are wrong.</p>
<p>Like Linux, OpenStack is an Open Innovation project: an independent,
common technical playground that is not owned by a single company and
where contributors form a meritocracy. Assuming you can somehow appoint
leaders in such a setting shows a great ignorance of how those projects
actually work. Leaders in an open innovation project don't derive their
authority from their title. They derive their authority from the respect
that the other contributors have for them. If they lose this respect,
their leadership will be disputed and you'll face the risk of a fork.
<strong>Project leaders are not appointed, they are grown.</strong> Linus wasn't
appointed, and he didn't decide one day that he should lead Linux. He
grew as the natural leader for this community over time.</p>
<p>Maybe people asking for a Linus of OpenStack like the idea of a single
dictator sitting at the top. But that setup is <strong>not easily
reproduced</strong>. Three conditions need to be met: you have to be the
founder (or first developer) of the project, your project has to grow
sufficiently slowly so that you can gather the undisputed respect of
incoming new contributors, and you have to keep your hands deep in
technical matters over time (to retain that respect). Linus checked all
those boxes. In OpenStack, there were a number of developers involved in
it from the start, and the project grew really fast, so a group of
leaders emerged, rather than a single undisputed figure.</p>
<p>I'd also argue that the "single leader" model is <strong>not really
desirable</strong>. OpenStack is not a single project, it's a collection of
projects. It's difficult to find a respected expert in all areas,
especially as we grew by including new projects within the OpenStack
collection. In addition to that, Linux as a project still struggles with
its <a href="http://en.wikipedia.org/wiki/Bus_factor">bus factor</a> of 1 and how
it would survive Linus. Organizing your technical leadership in a way
that makes it easier for leadership to transition to new figures makes a
stronger and more durable community.</p>
<p>Finally, asking for a Linus of OpenStack is somehow implying that the
current technical leadership is insufficient, which is at best ignorant,
at worst insulting. Linus fills two roles within Linux: the <em>technical
lead</em> role (final decision on technical matters, the buck stops here)
and the <em>release management</em> role (coordinating the release development
cycles and producing releases). OpenStack has project technical leads
("PTLs") to fill the first role, and a (separate) release manager to
fill the second. In addition to that, to solve cross-project issues, we
have a <a href="https://www.openstack.org/foundation/technical-committee/">Technical
Committee</a>
(which happens to include the PTLs and release manager).</p>
<p>If you are under the impression that this multi-headed technical
leadership might result in non-opinionated choices, think twice. The new
governance model establishing the Technical Committee and the full
authority of it over all technical matters in OpenStack is only a month
old, previously the project (and its governance model) was still owned
by a single company. The PTLs and Technical Committee members are highly
independent and have the interests of the OpenStack project as their top
priority. Most of them actually changed employers over the last year and
continued to work on the project.</p>
<p>I think what the press and the pundits actually want is a more visible
public figure, one who would make stronger design choices, if possible with
nice punch lines that would make <a href="http://en.wikiquote.org/wiki/Linus_Torvalds">good
quotes</a>. It's true that the
explosive growth of the project did not leave a lot of time so far for
technical leaders of OpenStack to engage with the press. It's true that
the OpenStack leadership tends to use friendly words and prefer
consensus where possible, which may not result in memorable quotes. But
confusing that with weakness is really a mistake. Technical leadership
in OpenStack is just fine the way it is, thank you for asking.</p>
<h1>The value of Open Development</h1>
<p><em>Thierry Carrez, 2012-10-23</em></p>
<p>Mark's recent blog post on <a href="http://www.markshuttleworth.com/archives/1200">Raring community
skunkworks</a> got me
thinking. I agree it would be unfair to spin this story as
Canonical/Ubuntu switching to closed development. I also agree that (as
the damage control messaging was <a href="http://www.markshuttleworth.com/archives/1207">quick to point
out</a>) inviting some
members of the community to participate in closed development projects
is actually a step towards more openness rather than a step backwards.</p>
<p>That said, it certainly is making the "closed development" option more
official and organized, which is not a step in the right direction in my
opinion. It reinforces it as a perfectly valid option, while I would
really like it to be an exception for corner cases. So at this point, it
may be useful to insist a bit on the benefits of open development, and
why dropping them might not be that good of an idea.</p>
<p><em>Open Development</em> is a transparent way of developing software, where
source code, bugs, patches, code reviews, design discussions, meetings
happen in the open and are accessible by everyone. "Open Source" is a
prerequisite of open development, but you can certainly do open source
without doing open development: that's what I call <em>the Android model</em>
and what others call <em>Open behind walls</em> model. You can go further than
open development by also doing "Open Design": letting an open community
of equals discuss and define the future features your project will
implement, rather than restricting that privilege to a closed group of
"core developers".</p>
<p>Open Development allows you to "release early, release often" and get
the testing, QA, feedback of (all) your users. This is actually a good
thing, not a bad thing. That feedback will help you catch corner cases,
consider issues that you didn't predict, get outside patches. More
importantly, Open Development helps lower the <em>barrier to entry</em> for
contributors to your project. It blurs the line between consumers and
producers of the software (no more "<em>us</em> vs. <em>them</em>" mentality),
resulting in a much more engaged community. Inviting select individuals
to have early access to features before they are unveiled sounds more
like a proprietary model beta testing program to me. It won't give you
the amount of direct feedback and variety of contributors that open
development gives you. Is the trade-off worth it?</p>
<p>As much as I dislike the Android model, I understand that the ability
for Google to give <em>some</em> select OEMs a bit of a head start has <em>some</em> value.
Reading Mark's post though, it seems that the main benefits for Ubuntu
are in avoiding early exposure of immature code and getting a bigger PR
splash at release time. I personally think that short-term, the drop in
QA due to reduced feedback will offset those benefits, and long-term,
the resulting drop in community engagement will also make this a bad
trade-off.</p>
<p>In OpenStack, we founded the project on <a href="http://wiki.openstack.org/Open">the Four
Opens</a>: Open Source, Open Development,
Open Design and Open Community. This early decision is what made
OpenStack so successful as a community, not the "cloud" hype. Open
Development made us very friendly to new developers wanting to
participate, and once they experienced Open Design (as exemplified in
our Design Summits) they were sold and turned into advocates of our
model and our project within their employing companies. Open Development
was really instrumental to OpenStack growth and adoption.</p>
<p>In summary, I think Open Development is good because you end up
producing better software with a larger and more engaged community of
contributors, and if you want to drop that advantage, you better have a
very good reason.</p>
<h1>Grizzly Design Summit schedule posted</h1>
<p><em>Thierry Carrez, 2012-10-10</em></p>
<p>Next week our community will gather in always-sunny San Diego for the
<a href="http://www.openstack.org/summit/san-diego-2012/">OpenStack Summit</a>. Our
usual <a href="http://wiki.openstack.org/Summit">Design Summit</a> is now a part of
the general event: the <a href="http://wiki.openstack.org/Summit/Grizzly">Grizzly Design
Summit</a> sessions will run over
the 4 days of the event! We start Monday at 9am and finish Thursday at
5:40pm. The schedule is now up at:</p>
<blockquote>
<p><a href="http://openstacksummitfall2012.sched.org/overview/type/design+summit">http://openstacksummitfall2012.sched.org/overview/type/design+summit</a></p>
</blockquote>
<p>This link will only show you the <em>Design Summit</em> sessions. Click
<a href="http://openstacksummitfall2012.sched.org/">here</a> for the complete
schedule. Minor scheduling changes may still happen over the next days
as people realize they are double-booked, but otherwise it's pretty
final now.</p>
<p>For newcomers, please note that the <em>Design Summit</em> is different from
classic conferences or the other tracks of the OpenStack Summit. There
are <strong>no formal presentations or speakers</strong>. The sessions at the <em>Design
Summit</em> are open discussions between contributors on a specific
development topic for the upcoming development cycle, moderated by a
session lead. It is possible to prepare a few slides to introduce the
current status and kick-off the discussion, but these should never be
formal speaker-to-audience presentations.</p>
<p>I'll be running the <a href="http://openstacksummitfall2012.sched.org/overview/type/design+summit/Process"><strong>Process</strong>
topic</a>,
which covers the development process and core infrastructure
discussions. It runs Wednesday afternoon and all Thursday, and we have a
pretty awesome set of stuff to discuss. Hope to see you there!</p>
<p>If you want to talk about something that is not covered elsewhere in the
Summit, please note that we'll have an <strong>Unconference</strong> room, open from
Tuesday to Thursday. You can grab a 40-min slot there to present
anything related to OpenStack! In addition to that, we'll also have
5-min <strong>Lightning talks</strong> after lunch on Monday-Wednesday... where you
can talk about anything you want. There will be a board posted on the
Summit floor, first come, first served :)</p>
<p>More details about the Grizzly Design Summit can be found <a href="http://wiki.openstack.org/Summit/Grizzly">on the
wiki</a>. See you all soon!</p>
<h1>Folsom is just around the corner</h1>
<p><em>Thierry Carrez, 2012-09-26</em></p>
<p>It's been a long time since my last blog post... I guess that cycle was
busier for me than I expected, due to my involvement in the Foundation
Technical Committee setup.</p>
<p>Anyway, we are now at the end of the 6-month Folsom journey for
OpenStack core projects, a ride which involved more than <strong>330
contributors</strong>, implementing <strong>185 features</strong> and <strong>fixing more than
1400 bugs</strong> in core projects alone!</p>
<p>At release day -1 we have OpenStack 2012.2 ("Folsom") release candidates
published for all the components:</p>
<ul>
<li>OpenStack Compute (Nova), at
<a href="https://launchpad.net/nova/folsom/folsom-rc3">RC3</a></li>
<li>OpenStack Networking (Quantum), at
<a href="https://launchpad.net/quantum/folsom/folsom-rc3">RC3</a></li>
<li>OpenStack Identity (Keystone), at
<a href="https://launchpad.net/keystone/folsom/folsom-rc2">RC2</a></li>
<li>OpenStack Dashboard (Horizon), at
<a href="https://launchpad.net/horizon/folsom/folsom-rc2">RC2</a></li>
<li>OpenStack Block Storage (Cinder), at
<a href="https://launchpad.net/cinder/folsom/folsom-rc3">RC3</a></li>
<li>OpenStack Storage (Swift) at version
<a href="https://launchpad.net/swift/folsom/1.7.4">1.7.4</a></li>
</ul>
<p>We are expecting OpenStack Image Service (Glance)
<a href="https://launchpad.net/glance/+milestone/folsom-rc3">RC3</a> later today!</p>
<p>Unless a critical, last-minute regression is found today in these
proposed tarballs, they should form the official OpenStack 2012.2
release tomorrow! Please take them for a last regression test ride, and
don't hesitate to ping us on IRC (#openstack-dev @ Freenode) or file
bugs (tagged <em>folsom-rc-potential</em>) if you think you can convince us to
reroll.</p>
<h1>New nova-rootwrap landing in folsom-2</h1>
<p><em>Thierry Carrez, 2012-07-02</em></p>
<p>This Thursday we will publish our second milestone of the Folsom cycle
for Nova. It will include <a href="https://launchpad.net/nova/+milestone/folsom-2">a number of new
features</a>, including the
one I worked on: a new, more configurable and extensible nova-rootwrap
implementation. Here is what you should know about it, depending on
whether you're a Nova user, packager or developer!</p>
<h2>Architecture</h2>
<h3 id="Purpose">Purpose</h3>
<p>The goal of the root wrapper is to allow the <em>nova</em> unprivileged user to
run a number of actions as the <em>root</em> user, in the safest manner
possible. Historically, Nova used a specific <em>sudoers</em> file listing
every command that the <em>nova</em> user was allowed to run, and just used
<strong>sudo</strong> to run that command as <em>root</em>. However, this was difficult to
maintain (the <em>sudoers</em> file was shipped in packaging), and did not allow for
complex filtering of parameters (advanced filters). The rootwrap was
<a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">designed</a>
to solve those issues.</p>
<h3 id="How_rootwrap_works">How rootwrap works</h3>
<p>Instead of just calling <strong>sudo make me a sandwich</strong>, Nova calls <strong>sudo
nova-rootwrap /etc/nova/rootwrap.conf make me a sandwich</strong>. A generic
<em>sudoers</em> entry lets the <em>nova</em> user run <strong>nova-rootwrap</strong> as <em>root</em>.
nova-rootwrap looks for filter definition directories in its
configuration file, and loads <strong>command filters</strong> from them. Then it
checks if the command requested by Nova matches one of those filters, in
which case it executes the command (as <em>root</em>). If no filter matches, it
denies the request.</p>
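<p>As a simplified sketch of that flow (illustrative only, not the actual nova-rootwrap code; the class and function names here are invented for the example), the matching logic boils down to:</p>

```python
import os

class CommandFilter:
    """Allows a command when its name matches the allowed executable's name."""
    def __init__(self, exec_path):
        self.exec_path = exec_path

    def match(self, userargs):
        # Compare the requested command name with the allowed executable
        return os.path.basename(self.exec_path) == userargs[0]

def dispatch(filters, userargs):
    """Return the command to execute as root, or None to deny the request."""
    for f in filters:
        if f.match(userargs):
            # The real wrapper would exec this as root, with a clean environment
            return [f.exec_path] + list(userargs[1:])
    return None  # no filter matched: deny

# Hypothetical filter list: only kpartx is allowed on this node
filters = [CommandFilter("/sbin/kpartx")]
```

<p>With that sketch, <code>dispatch(filters, ["kpartx", "-l", "/dev/loop0"])</code> returns the full command to run, while <code>dispatch(filters, ["rm", "-rf", "/"])</code> returns <code>None</code>.</p>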
<h3 id="Security_model">Security model</h3>
<p>The escalation path is fully controlled by the <em>root</em> user. A <em>sudoers</em>
entry (owned by <em>root</em>) allows <em>nova</em> to run (as <em>root</em>) a specific
rootwrap executable, and only with a specific configuration file (which
should be owned by <em>root</em>). nova-rootwrap imports the Python modules it
needs from a cleaned (and system-default) PYTHONPATH. The configuration
file (also <em>root</em>-owned) points to <em>root</em>-owned filter definition
directories, which contain <em>root</em>-owned filters definition files. This
chain ensures that the <em>nova</em> user itself is not in control of the
configuration or modules used by the nova-rootwrap executable.</p>
<h2 id="Rootwrap_for_users">Rootwrap for users: Nova configuration</h2>
<p>Nova must be configured to use nova-rootwrap as its <em>root_helper</em>. You
need to set the following in <strong>nova.conf</strong>:</p>
<div class="highlight"><pre><span></span><code>root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
</code></pre></div>
<p>The configuration file (and executable) used here must match the one
defined in the <em>sudoers</em> entry (see below), otherwise the commands will
be rejected !</p>
<h2 id="Rootwrap_for_packagers">Rootwrap for packagers</h2>
<h3 id="Sudoers_entry">Sudoers entry</h3>
<p>Packagers need to make sure that Nova nodes contain a <em>sudoers</em> entry
that lets the <em>nova</em> user run nova-rootwrap as <em>root</em>, pointing to the
<em>root</em>-owned rootwrap.conf configuration file and allowing any parameter
after that:</p>
<div class="highlight"><pre><span></span><code>nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *
</code></pre></div>
<h3 id="Filters_path">Filters path</h3>
<p>Nova looks for a <em>filters_path</em> in <strong>rootwrap.conf</strong>, which contains
the directories it should load filter definition files from. It is
recommended that Nova-provided filters files are loaded from
<strong>/usr/share/nova/rootwrap</strong> and extra user filters files are loaded
from <strong>/etc/nova/rootwrap.d</strong>.</p>
<div class="highlight"><pre><span></span><code><span class="k">[DEFAULT]</span>
<span class="na">filters_path</span><span class="o">=</span><span class="s">/etc/nova/rootwrap.d,/usr/share/nova/rootwrap</span>
</code></pre></div>
<p>Directories defined on this line should all exist, be owned and
writeable only by the <em>root</em> user.</p>
<h3 id="Filter_definitions">Filter definitions</h3>
<p>Finally, packaging needs to install, for each node, the filters
definition file that corresponds to it. You should <strong>not</strong> install any
other filters file on that node, otherwise you would allow extra
unneeded commands to be run by <em>nova</em> as <em>root</em>.</p>
<p>The filter file corresponding to the node must be installed in one of
the <em>filters_path</em> directories (preferably /usr/share/nova/rootwrap).
For example, on compute nodes, you should only have
/usr/share/nova/rootwrap/compute.filters. The file should be owned and
writeable only by the <em>root</em> user.</p>
<p>All filter definition files can be found in Nova source code under
etc/nova/rootwrap.d.</p>
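<p>For illustration, a filters definition file is a plain INI-style file with a <code>[Filters]</code> section mapping a filter name to a filter type, a command path and the user to run it as. The entry below is a hypothetical example, not copied from the real compute.filters; see the reference on the wiki for the authoritative syntax:</p>

```ini
[Filters]
# filter_name: FilterType, command_path, run-as user
kpartx: CommandFilter, /sbin/kpartx, root
```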
<h2 id="Rootwrap_for_plug-in_writers">Rootwrap for plug-in writers: adding new run-as-root commands</h2>
<p>Plug-in writers may need to have the <em>nova</em> user run additional commands
as <em>root</em>. They should use nova.utils.execute(run_as_root=True) to
achieve that. They should create their own filter definition file and
install it (owned and writeable only by the <em>root</em> user !) into one of
the <em>filters_path</em> directories (preferably /etc/nova/rootwrap.d). For
example the foobar plugin could define its extra filters in a
/etc/nova/rootwrap.d/foobar.filters file.</p>
<p>The format of the filter file is defined
<a href="http://wiki.openstack.org/Nova/Rootwrap#Reference">here</a>.</p>
<h2 id="Rootwrap_for_core_developers">Rootwrap for core developers</h2>
<h3 id="Adding_new_run-as-root_commands-1">Adding new run-as-root commands</h3>
<p>Core developers may need to have the <em>nova</em> user run additional commands
as <em>root</em>. They should use nova.utils.execute(run_as_root=True) to
achieve that, and add a filter for the command they need in the
corresponding etc/nova/rootwrap.d/*.filters file in Nova's source code.
For example, to add a command that needs to be run by network nodes,
they should modify the etc/nova/rootwrap.d/network.filters file.</p>
<p>The format of the filter file is defined
<a href="http://wiki.openstack.org/Nova/Rootwrap#Reference">here</a>.</p>
<h3 id="Adding_your_own_filter_types">Adding your own filter types</h3>
<p>The default filter type, CommandFilter, is pretty basic. It only checks
that the command name matches; it does not perform advanced checks on
the command arguments. A number of other more command-specific filter
types are available, see
<a href="http://wiki.openstack.org/Nova/Rootwrap#Reference">here</a>.</p>
<p>That said, you can easily define new filter types to further control
what exact command you actually allow the <em>nova</em> user to run as <em>root</em>.
See nova/rootwrap/filters.py for details.</p>
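<p>To give an idea of what a custom filter type can look like (a sketch under assumptions: the class and method names below mirror the basic command-name check described above, but are not the actual filters.py API), here is a filter that only allows read-only invocations of a command:</p>

```python
import os

class CommandFilter:
    """Sketch of the basic filter: only checks the command name."""
    def __init__(self, exec_path):
        self.exec_path = exec_path

    def match(self, userargs):
        return os.path.basename(self.exec_path) == userargs[0]

class NoWriteDDFilter(CommandFilter):
    """Hypothetical custom filter type: allows 'dd' only when no 'of='
    output argument is present, i.e. read-only invocations."""
    def match(self, userargs):
        if not super().match(userargs):
            return False
        # Reject any invocation that names an output file or device
        return not any(arg.startswith("of=") for arg in userargs[1:])
```

<p>Such a subclass lets you go beyond name matching and inspect the arguments before granting <em>root</em> execution.</p>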
<p>This documentation, together with a reference section detailing the file
formats, is available <a href="http://wiki.openstack.org/Nova/Rootwrap">on the
wiki</a>.</p>
<h1>Bug Triage day results</h1>
<p><em>Thierry Carrez, 2012-06-08</em></p>
<p>How did the OpenStack <a href="http://wiki.openstack.org/BugDays/20120607BugTriage">Bug Triage day we organized
yesterday</a> go? Did
organizing an event make a difference? Here are the results!</p>
<p>Nova has more bugs than all the other core projects combined, and the
most slack to clean up. We went from 237 "New" bugs at the beginning of
the day down to 42, a completion rate of 82%. In the meantime, we
managed to permanently close 86 open bugs out of a total of 627:</p>
<p><img alt="Nova bug day results" src="https://ttx.re/images/nova-bug1.png"></p>
<p>So the BugTriage day definitely made a difference! Congrats to all the
participants! It leaves our bug tracker in a lot better shape, and
created momentum around bug triaging and having an up-to-date database
of known issues.</p>
<p>The success is even more obvious on smaller projects, with Glance,
Keystone and Quantum all managing to complete all
<a href="http://wiki.openstack.org/BugTriage">BugTriage</a> tasks in the day! See
for example the results for Quantum:</p>
<p><img alt="Quantum bug day results" src="https://ttx.re/images/quantum-bug1.png"></p>
<p>See you all for our next <a href="http://wiki.openstack.org/BugDays">BugDay</a>...
which will most likely be a Bug Squashing Day (close as many bugs as
possible) shortly after folsom-2.</p>
<h1>OpenStack BugTriage day</h1>
<p><em>Thierry Carrez, 2012-06-06</em></p>
<p>Tomorrow, <strong>Thursday June 7th</strong>, the OpenStack community will run a
BugTriage day. Why are we doing this? What are we going to do? How can
you participate?</p>
<p>Bug tracking is an essential part of our development processes.
Well-maintained bug lists help us know the current state of our projects
better, define bugfixing priorities, and identify milestone and
release-critical issues.</p>
<p>The trick is, the bug lists can quickly get unusable if they are not
well-maintained. Most of our core projects managed to keep their bug
lists relevant and current, but the largest ones (Nova and Swift)
allowed some pile-up in the recent months... and that creates a vicious
circle: as the bug tracker becomes less relevant, bugs get even less
attention, and things get worse.</p>
<p>BugTriage days are a category of
<a href="http://wiki.openstack.org/BugDays">BugDays</a> specifically designed to
break this vicious circle. They were discussed at the Folsom <a href="http://wiki.openstack.org/Summit">Design
Summit</a> as a way to improve our bug
triaging practice. The idea is to concentrate efforts, for one day, in
making our bug tracker relevant again, and start a virtuous circle of
maintenance instead of a vicious circle of abandonment.</p>
<p>How are we going to achieve that? The
<a href="http://wiki.openstack.org/BugTriage">BugTriage</a> page on the wiki
describes a set of triaging tasks that we should complete. Task one, for
example, is about confirming incoming, untouched "New" bugs. The goal is
to complete as many tasks as possible. Participants will gather in the
<strong>#openstack-bugday</strong> IRC channel on Freenode. It starts as soon as
it's Thursday somewhere in the world, and will last as long as it's
still Thursday somewhere. We will track the results of our efforts live
on <a href="http://wiki.openstack.org/bugstats/">pretty graphs</a>, to quantify how
well we do.</p>
<p>So please join us tomorrow in that long-overdue Spring cleaning effort,
which will go a long way toward making Folsom an awesome OpenStack
release! You can read more about the whole event
<a href="http://wiki.openstack.org/BugDays/20120607BugTriage">here</a>.</p>
<h1>A community maturing</h1>
<p><em>Thierry Carrez, 2012-04-25</em></p>
<p>A few days after an intense and fruitful OpenStack Design Summit, I just
recovered enough from jet lag to deliver my impressions in written form.
We put a lot of smart people into rooms to discuss various subjects
hastily defined while we were busy releasing Essex... and the magic
worked again: open collaboration between developers from competing
companies, strong but always polite technical discussions, lots of
decisions, teams of developers with common interests forming,
duplication of effort avoided...</p>
<p>It's clear that the format (mostly inherited from Ubuntu Developer
Summits) works very well in our open innovation project: everybody comes
with a plan that is open to modifications and the developers are
empowered with decision making. This makes the design summit sessions
very appealing to developers, turning them into advocates of our
development model in their companies, removing any barriers to
contribution that could be left. Being part of the OpenStack community
is just pleasant!</p>
<p>However this edition was a bit different from previous ones. There were
a lot of signs that our community is maturing. With OpenStack growing,
developers can no longer follow every session and give their opinion on
every subject: they have to pick their fights, and trust the other
developers to come up with the right design in sessions they can't
attend. So sessions had a lot less advice-giving people and a lot more
people actually signing up to do work. The topics were much more
deployer-oriented and much less about chasing the latest shiny
stuff. Even less glamorous sessions like bug triaging, documentation,
internationalization or stable branch maintenance saw a lot of
participants present, and signing up to help.</p>
<p>People realized that OpenStack is here to stay, and that strategic
contributions are necessary for it to reach the final stages of its
long-term world domination plans. When did that switch happen? A graph
recently published in the <a href="http://www.openstack.org/blog/2012/04/community-weekly-review-apr-6-13/">community
newsletter</a> shows
the change happening a few months into Essex:</p>
<p><img alt="Issues opened and fixed" src="https://ttx.re/images/issues-opened-fixed.png"></p>
<p>As you can see, people used to care about fixing bugs in spikes around
release times. But starting around November 2011, we see the bugfix
curve starting to follow the bug-reporting curve more closely.</p>
<p>After the Diablo release I advocated for companies to put their money
where their mouth is and start <a href="https://ttx.re/the-next-step-for-openstack.html">contributing strategically to
OpenStack</a>.
I'm happy to see that it happened during the Essex cycle, and that the
awesome Design Summit we just had confirms that trend.</p>OpenStack Folsom Design Summit2012-04-12T10:12:00+02:002012-04-12T10:12:00+02:00Thierry Carreztag:ttx.re,2012-04-12:/openstack-folsom-design-summit.html<p>In a few days the OpenStack developer community will gather in the heart
of San Francisco for three days of brainstorming and discussions around
the next release cycle of OpenStack projects, code-named "Folsom".</p>
<p>The Design Summit is a key moment for our open innovation community.
This is not a conference …</p><p>In a few days the OpenStack developer community will gather in the heart
of San Francisco for three days of brainstorming and discussions around
the next release cycle of OpenStack projects, code-named "Folsom".</p>
<p>The Design Summit is a key moment for our open innovation community.
This is not a conference with speakers. This is not where a closed
developer group announces to the public the changes they intend to push
to their private "open source" project. We design, discuss and make
decisions at the summit as a community. It's quite uncommon, and that's
what makes us different.</p>
<p>Our (elected) PTLs have final say in case of unsolvable conflicts, but
generally consensus is reached in those face-to-face meetings much more
easily than on mailing-lists. That's why this is a critical moment, and
we need to make the best use of this short time together. Connect with
other people interested in solving the same issues, avoid duplication of
work, and collaborate with developers from all those different companies
on making OpenStack awesome.</p>
<p>We have great brainstorming topics for those three days. Most
<a href="http://wiki.openstack.org/Summit">tracks</a> already have a tentative
schedule posted at <a href="http://folsomdesignsummit2012.sched.org/">http://folsomdesignsummit2012.sched.org/</a>, although
it's still subject to scheduling changes. If you have a new idea for a
session, it's too late to get in the official tracks, but we provide an
<em>Unconference</em> room for talks that could not fit in the tracks,
last-minute ideas and continuation of discussions. And since we like to
talk about random stuff that matters to us, we will also have 5-min
lightning talks every day after lunch.</p>
<p>Session leads should take the time to view Jim Plamondon's <a href="http://www.youtube.com/watch?v=GmY4cK_6NKY">training
video</a>; it's a great
introduction on how to make the most of a session you lead. I hope to
meet all of you in person next week!</p>Ask not what OpenStack can do for you...2012-04-04T12:09:00+02:002012-04-04T12:09:00+02:00Thierry Carreztag:ttx.re,2012-04-04:/ask-not-what-openstack-can-do-for-you.html<p>Over the last few months I've seen more and more tweets and news articles
using the formulation "OpenStack should", as in "OpenStack should
support Amazon APIs since it's the de-facto standard". I think there is
a fundamental misconception there and I'd like to address it.</p>
<p>As a quick aside (and contrary …</p><p>Over the last few months I've seen more and more tweets and news articles
using the formulation "OpenStack should", as in "OpenStack should
support Amazon APIs since it's the de-facto standard". I think there is
a fundamental misconception there and I'd like to address it.</p>
<p>As a quick aside (and contrary to what the twittersphere sometimes
report), it should be noted that OpenStack Nova always supported the
Amazon EC2 API, and that OpenStack Swift grew an Amazon S3 compatibility
layer last year. That said, I'll be the first to admit that one could
rightfully claim that the AWS API support in OpenStack is in worse
shape than the OpenStack API support. But the reason behind it is not
some "OpenStack strategy", it's a reflection of the participating
companies' focus.</p>
<p>OpenStack is a true <em>Open Innovation</em> project. It's a collaboration
ground where multiple companies are free to invest development resources
to care about the stuff that is important to them. It's an influence
game where you need to donate developers to play: OpenStack is the
playing field, not the players that push the ball.</p>
<p>Red Hat cared about QPID support, they fielded developers to make it
happen in OpenStack. EC2 API support is originally in Nova because NASA
cared about it. Then with the increase of Rackspace's influence on the
project, the OpenStack API grew faster. Now with Canonical (and others)
interest, Amazon's API support is getting better. Ultimately, code
talks, and you can make things happen. That's what makes OpenStack so
appealing but also so confusing to the industry.</p>
<p>As "OpenStack", we need to make sure the playing field is level (and
hopefully the Foundation will be set up soon enough to address that) and
that the code is modular and welcoming. But it's up to the participating
companies, which throw development resources at the project, to invest
in what's important for them or their customers. And maintain it over
the long run.</p>
<p>So whenever you say "OpenStack should", ask yourself if you shouldn't
really be saying... [Rackspace, Cisco, HP, IBM, Red Hat...] should. Ask
not what OpenStack can do for you. Ask what you can do for OpenStack.</p>OpenStack Essex: the last mile2012-04-04T09:50:00+02:002012-04-04T09:50:00+02:00Thierry Carreztag:ttx.re,2012-04-04:/openstack-essex-the-last-mile.html<p>At the time I'm writing this, we have final release candidates
published for all the components that make up OpenStack 2012.1,
codenamed "Essex":</p>
<ul>
<li>OpenStack Compute (Nova), at
<a href="https://launchpad.net/nova/essex/essex-rc3">RC3</a></li>
<li>OpenStack Image Service (Glance), at
<a href="https://launchpad.net/glance/essex/essex-rc3">RC3</a></li>
<li>OpenStack Identity (Keystone), at
<a href="https://launchpad.net/keystone/essex/essex-rc2">RC2</a></li>
<li>OpenStack Dashboard (Horizon), at
<a href="https://launchpad.net/horizon/essex/essex-rc2">RC2</a></li>
<li>OpenStack Storage (Swift) at version …</li></ul><p>At the time I'm writing this, we have final release candidates
published for all the components that make up OpenStack 2012.1,
codenamed "Essex":</p>
<ul>
<li>OpenStack Compute (Nova), at
<a href="https://launchpad.net/nova/essex/essex-rc3">RC3</a></li>
<li>OpenStack Image Service (Glance), at
<a href="https://launchpad.net/glance/essex/essex-rc3">RC3</a></li>
<li>OpenStack Identity (Keystone), at
<a href="https://launchpad.net/keystone/essex/essex-rc2">RC2</a></li>
<li>OpenStack Dashboard (Horizon), at
<a href="https://launchpad.net/horizon/essex/essex-rc2">RC2</a></li>
<li>OpenStack Storage (Swift) at version
<a href="https://launchpad.net/swift/essex/1.4.8">1.4.8</a></li>
</ul>
<p>Unless a critical, last-minute regression is found today in these
proposed packages, they should make up the official OpenStack 2012.1
release tomorrow! Please give those tarballs one last check, and
don't hesitate to ping us on IRC (#openstack-dev @ Freenode) or file
bugs (tagged essex-rc-potential) if you think you can convince us to
reroll.</p>
<p>Those six months have been a long ride, with 139 features added and 1650
bugs fixed, but this is the last mile.</p>The road to OpenStack Essex release candidates2012-03-16T12:57:00+01:002012-03-16T12:57:00+01:00Thierry Carreztag:ttx.re,2012-03-16:/road-to-essex-rc.html<p>Since the beginning of March, OpenStack developers have been focusing on
testing and bugfixes, with the objective of producing a release
candidate for each project.</p>
<p>Regular readers of my blog know I'm not the last one to complain when I
feel developers don't care about the release and don't participate in …</p><p>Since the beginning of March, OpenStack developers have been focusing on
testing and bugfixes, with the objective of producing a release
candidate for each project.</p>
<p>Regular readers of my blog know I'm not the last one to complain when I
feel developers don't care about the release and don't participate in
that critical phase of the cycle. I'm happy to report that the
engagement of developers around making Essex good is overwhelming.
Driven by technical leads that understand the challenges, significant
groups of old and new developers are participating, testing, assigning
themselves to bugs reported by others and fixing them in record time. As
a set of project-focused groups, I think we are quickly maturing.</p>
<p>So, how far are we from Essex today? The final release is set for April
5th, which means each core project needs to produce its final release
candidate before then. As of today, no project has produced a release
candidate yet.</p>
<p>Swift is expected to release its final Essex version (1.4.8) on March
22. This version will be included in the OpenStack Essex release, unless a
critical regression is detected in it.</p>
<p>Keystone (which every other project depends on) is still struggling with
9 RC bugs, including some key decisions to be made on configuration
handling. This is the hot spot right now: if you have free cycles,
please consider asking Joe Heck (heckj on IRC) how you can best help the
Keystone crew. I'd really like to have an RC1 for that project by
Thursday next week.</p>
<p>Glance also still looks a bit far away, with 5 RC bugs listed and slow
progress on them. I still hope Glance can be ready by Tuesday next week
though.</p>
<p>Nova is looking quite good, with 2 RC bugs left on the list. The trick
with Nova is that regressions and critical issues can hide in dark
corners of its 120k lines of code, so the focus is really on finding the
remaining issues, filing and targeting them. I expect we'll be able to
publish RC1 on Monday or Tuesday next week.</p>
<p>Horizon is almost ready (3 RC bugs left), waiting for a few fixes to
land in other projects before it can issue its RC1. It should be out
early next week.</p>
<p>Once all projects release their RC1, the hunt for the overlooked
release-critical issue will be on. It will be time to put the proposed
release to the test. In order to limit potential last-minute
regressions, only showstoppers will warrant a respin of the release
candidate, other bugs will be listed in the Known Bugs section of the
release notes. You can use the "essex-rc-potential" tag to mark bugs
that you think should be fixed before we release Essex.</p>
<p>Let's all make it rock!</p>Open development, releases and quality2012-02-21T14:07:00+01:002012-02-21T14:07:00+01:00Thierry Carreztag:ttx.re,2012-02-21:/open-dev-releases-quality.html<p>Every 6 months, as a cycle ends and we prepare the next, we look back at
our release model and try to see how we can improve it. My opinion is
that we need (once more) to evolve it, and here is why.</p>
<h2>Objectives and past evolutions</h2>
<p>Our main objective …</p><p>Every 6 months, as a cycle ends and we prepare the next, we look back at
our release model and try to see how we can improve it. My opinion is
that we need (once more) to evolve it, and here is why.</p>
<h2>Objectives and past evolutions</h2>
<p>Our main objective is to produce stable and usable software. Our
secondary objective is to let stable new features reach the hands of our
users in a timely manner. Our tertiary objective is to empower
developers to work efficiently on the features and improvements of
tomorrow. With simple solutions, those three objectives are not really
compatible, and in the past we tried to make adjustments without
acknowledging the fundamental incompatibility between those objectives.</p>
<p>The <a href="http://wiki.openstack.org/CactusReleaseSchedule">Austin-Cactus
system</a> was a 3-month
cycle with various freezes. The issues with this system were that it
took too long to get features in the hands of testers and users,
developers had trouble working during the frozen periods, and not so
many people actually helped during the QA period.</p>
<p>For <a href="http://wiki.openstack.org/DiabloReleaseSchedule">Diablo</a>, we
decided to switch to a 6-month cycle with monthly milestones. Those
milestones were supposed to address the "get features into the hands of
users early" issue. To empower developers, we introduced an
almost-always-open trunk: we only had 2 weeks of feature freeze before
the Essex branch opened (and coexisted with the Diablo release branch).
The problem was the resulting quality was not good, since we accumulated
6 months' worth of features and only had 2 weeks' worth of QA on them.</p>
<p>So for <a href="http://wiki.openstack.org/EssexReleaseSchedule">Essex</a>, there
was a decision that each project would decide where to place its feature
freeze and its first release candidate (which would be when the Folsom
branch opens). Most of them decided to use the essex-3 milestone as a
soft feature freeze and essex-4 as a hard one, which reintroduced the
"closed trunk" issue. And the work needed to stabilize 6 months' worth
of features is still daunting. And we saw lots of feature freeze
exceptions for last-minute "almost there" features that couldn't afford to
wait another 6 months.</p>
<p>So what's the solution? Is there a good way to simultaneously
reduce the pain of "missing a release", have always-open development
branches, and get higher quality?</p>
<h2>The kernel model</h2>
<p>The only way to reduce the pain of missing a release is to have shorter
time-based releases. The only way to have always-open development
branches without sacrificing release quality is to separate them
completely from release branches. Where do those truisms lead us?</p>
<p>Another famous open innovation project has already been there, and
that's the Linux kernel. Development happens in various always-open
topic branches, and regularly a merge window opens to propose stable
features for inclusion in the mainline kernel.</p>
<p>If we manage to efficiently separate OpenStack in topics, we could adopt
the same model. We could have several topic branches where core
developers on a specific area can collaborate and review code affecting
that area. We could have frequent releases (every 6-8 weeks?), and for
each release we could have a "merge window" where stuff from topic
branches can be proposed for release, if deemed stable enough. Between
the moment the merge window closes and the moment the final release is
cut, various release candidates can be produced, on which serious QA can
be unleashed without blocking other developers.</p>
<p>This solves all objectives. If a feature is not ready yet (according to
the team maintaining the topic branch or according to release
management), it can bake until the next merge window, which is not far
away. Regular releases ensure that improvements reach the users in a
timely manner. Development branches are always open. And with a
reasonable amount of new code in every release, QA work is facilitated,
theoretically resulting in higher release quality.</p>
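<p>As a toy model of that flow (branch and feature names here are
hypothetical, purely for illustration): at each merge window, work that a
topic team deems stable moves to mainline, and the rest simply bakes until
the next window.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    stable: bool  # deemed stable enough by the topic team

@dataclass
class TopicBranch:
    name: str
    pending: list = field(default_factory=list)

def merge_window(mainline, topics):
    """One merge window: stable features from each always-open topic
    branch are proposed into mainline; the rest wait for the next
    window, which is never far away with short cycles."""
    for topic in topics:
        ready = [f for f in topic.pending if f.stable]
        mainline.extend(f.name for f in ready)
        topic.pending = [f for f in topic.pending if not f.stable]
    return mainline

# Hypothetical topic branches, not real OpenStack topics.
scheduler = TopicBranch("scheduler", [Feature("live-migration", True),
                                      Feature("new-filter", False)])
api = TopicBranch("api", [Feature("pagination", True)])

mainline = []
merge_window(mainline, [scheduler, api])
print(mainline)                             # ['live-migration', 'pagination']
print([f.name for f in scheduler.pending])  # ['new-filter']
```

<p>The point the model makes is that missing a window costs "new-filter"
only a few weeks, not a full six-month cycle.</p>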
<h2>Additional benefits</h2>
<p>Splitting development into topics has an additional benefit: smaller
groups developing an expertise on a specific area make better reviewers
on that specific area than a random nova-core developer who can't be an
expert in all things Nova. I'd say that each topic should be small
enough to be manageable with a team of 6 core reviewers at the maximum.
Of course, nothing prevents anyone from being active on multiple topics
if they wish.</p>
<p>More frequent releases also allow you to set themes for each release.
It's difficult to refuse some feature in a 6-month cycle, but it's easy
to delay a feature to the next 6-week cycle. In the same way some kernel
releases introduce large architectural changes and some others are more
geared towards performance improvements or stability, we could also
define themes for every release -- after all the next one is not so far
away.</p>
<h2>Challenges</h2>
<p>There are a few issues with this model obviously. Code needs to be
componentized enough so that merge pains can be limited. Changes from
every release need to be merged back into topic branches. Code needs to
be clearly separated into a set of relevant topics, each with its own
set of core reviewers maintaining the topic branch. The release team
must be staffed enough to be able to review proposed code for stability.
Bug fixes need to easily end up in the release branch. And how do those
"releases" match the 6-month period between summits? What would
"Folsom" mean in that respect?</p>
<p>I hope we can use the following weeks to discuss the devil in the
details of this possible evolution, and be ready to take a decision when
the time comes for us to gather at the Folsom Design summit.</p>FOSDEM 2012 feedback2012-02-08T14:49:00+01:002012-02-08T14:49:00+01:00Thierry Carreztag:ttx.re,2012-02-08:/fosdem-2012-feedback.html<p>I'm back from Brussels, which hosted the coldest FOSDEM ever. It
started on Friday night with the traditional beer event. Since the
Delirium was a bit small to host those thousands of frozen geeks, the
FOSDEM organizers had enlisted the whole block as approved bars!</p>
<p>On the Saturday, I spent …</p><p>I'm back from Brussels, which hosted the coldest FOSDEM ever. It
started on Friday night with the traditional beer event. Since the
Delirium was a bit small to host those thousands of frozen geeks, the
FOSDEM organizers had enlisted the whole block as approved bars!</p>
<p>On the Saturday, I spent most of my time in the <a href="http://fosdem.org/2012/schedule/track/virtualization_and_cloud_devroom">Cloud and
Virtualization
devroom</a>,
which I escaped only to see Simon Phipps announce the <a href="http://blogs.computerworlduk.com/simon-says/2012/02/a-new-osi-for-a-new-decade/index.htm">new
membership-based
OSI</a>,
and Paolo Bonzini talking about the KVM ecosystem (in a not technical
enough way, IMO). My own <a href="http://t.co/8NaehBzT">OpenStack talk</a> was made
a bit difficult due to the absence of a mike to cover the 550-seat
Chavanne auditorium... but the next talks got one. The highlight of the
day was Ryan Lane's "infrastructure as an open source project"
presentation, about how Wikimedia Labs <a href="http://t.co/aARDOAvY">uses Git, Gerrit, Jenkins and
OpenStack</a> to handle its infrastructure like a
contributor-driven open source project. The day ended with a good and
frank <a href="http://t.co/lkmsAYFU">discussion between OpenStack developers</a>,
along with upstream projects and downstream distributions.</p>
<p>On Sunday I tried to hop between devrooms, but in a lot of cases the
room was full and I couldn't enter, so I spent more time in the hallway
track. I enjoyed Soren's talk about using more prediction algorithms
(instead of simple thresholds) in monitoring systems, introducing his
<a href="http://www.surveilr.net/2011/09/15/announcing-surveilr-not-your-average-monitoring-system/">Surveilr
project</a>.
The highlight of the day was Dan Berrangé's talk about <a href="http://berrange.com/posts/2012/01/17/building-application-sandboxes-with-libvirt-lxc-kvm/">using libvirt
to run sandboxed
applications</a>,
using virt-sandbox. There are quite a few interesting uses for this, and
the performance penalty sounds more than acceptable.</p>
<p>Overall it was a great pleasure for me to attend FOSDEM this year.
Congratulations to the organizers again. I'll be back next year,
hopefully it will be warmer :)</p>About collaboration2012-02-01T17:38:00+01:002012-02-01T17:38:00+01:00Thierry Carreztag:ttx.re,2012-02-01:/about-collaboration.html<p>In recent years, as open source becomes more ubiquitous, I've seen a new
breed of participants appearing. They push their code to GitHub like you
would wear a very visible good behavior marketing badge. They copy code
from multiple open source projects, modify it, but don't contribute back
their changes …</p><p>In recent years, as open source becomes more ubiquitous, I've seen a new
breed of participants appearing. They push their code to GitHub like you
would wear a very visible good behavior marketing badge. They copy code
from multiple open source projects, modify it, but don't contribute back
their changes to upstream. They seem to consider open source as a trendy
all-you-can-eat buffet combined with a cool marketing gimmick.</p>
<p>In my opinion, this is not what open source is about. I see open source,
and more generally open innovation (which adds open design, open
development and open community), as a solution for the future. The world
is facing economic and ecological limits: it needs to stop designing
for obsolescence, produce smarter, reduce duplication of effort, and fix
the rift between consumers and producers. Open innovation encourages
synergy and collaboration. It reduces waste. It enables consumers to be
producers again. That's a noble goal, but without convergence, we can't
succeed.</p>
<p>The behavior of these new participants goes against that. I call this
the GitHub effect: you encourage access to the code, forking and
fragmentation, while you should encourage convergence and collaboration
on a key repository. And like having a "packaging made from recyclable
materials" sign on your product doesn't make it environment-friendly,
just publishing your own code somewhere under an open source license
doesn't really make it open.</p>
<p>On the extreme fringe of that movement, we also see the line with closed
source blurring. Building your own closed product on top of open source
technology, and/or abusing the word "Open" to imply that all you do is
open source, using the uncertainty to reap easy marketing benefits. I've
even seen a currently-closed-source project being featured as an open
source project to watch in 2012. We probably need to start playing
harder, denounce fake participants and celebrate good ones.</p>
<p>Some people tell me my view goes against making money with open source.
That might be true for easy, short-term money. But I don't think you
need to abuse open source to make money out of it. The long-term
benefits of open innovation are obvious, and like for green businesses,
good behavior and long-term profit go well together. Let's all make sure
we encourage collaboration and promote the good behavior, and hopefully
we'll fix this.</p>OpenStack developers meeting at FOSDEM2012-01-27T15:54:00+01:002012-01-27T15:54:00+01:00Thierry Carreztag:ttx.re,2012-01-27:/openstack-developers-meeting-at-fosdem.html<p>Next week, the European free and open source software developers will
converge to Brussels for <a href="http://fosdem.org/2012/">FOSDEM</a>. We took this
opportunity to apply for an OpenStack developers gathering in the
<a href="http://fosdem.org/2012/schedule/track/virtualization_and_cloud_devroom">Virtualization and
Cloud</a>
devroom.</p>
<p>At 6pm on Saturday (last session of the day), in the Chavanne room, we
will have a …</p><p>Next week, the European free and open source software developers will
converge to Brussels for <a href="http://fosdem.org/2012/">FOSDEM</a>. We took this
opportunity to apply for an OpenStack developers gathering in the
<a href="http://fosdem.org/2012/schedule/track/virtualization_and_cloud_devroom">Virtualization and
Cloud</a>
devroom.</p>
<p>At 6pm on Saturday (last session of the day), in the Chavanne room, we
will have a one-hour town hall meeting. If you're an existing OpenStack
contributor, a developer considering joining us, an upstream project
developer, a downstream distribution packager, or just curious about
OpenStack, you're welcome to join us! I'll be there, Stefano Maffulli
(our community manager) will be there, and several OpenStack core
developers will be there.</p>
<p>We'll openly discuss issues and solutions about integration with
upstream projects, packaging, governance, development processes,
community or release cycles. In particular, we'll have a distribution
panel where every OpenStack distribution will be able to explain how
they support OpenStack and discuss what we can improve to make things
better for them.</p>
<p>And at the end of the session we can informally continue the discussion
around fine Belgian beers or their famous
<a href="http://en.wikipedia.org/wiki/Carbonade_flamande">Carbonade</a>!</p>Making more solid OpenStack releases2012-01-18T14:16:00+01:002012-01-18T14:16:00+01:00Thierry Carreztag:ttx.re,2012-01-18:/making-more-solid-openstack-releases.html<p>As we pass the middle of the <a href="http://wiki.openstack.org/EssexReleaseSchedule">Essex development
cycle</a>, questions about
the solidity of this release start to pop up. After all, the previous
releases were far from stellar, and with more people betting their
business on OpenStack we can't really afford another half-baked release.</p>
<p>Common thinking (mostly coming …</p><p>As we pass the middle of the <a href="http://wiki.openstack.org/EssexReleaseSchedule">Essex development
cycle</a>, questions about
the solidity of this release start to pop up. After all, the previous
releases were far from stellar, and with more people betting their
business on OpenStack we can't really afford another half-baked release.</p>
<p>Common thinking (mostly coming from years of traditional software
development experience) is that we shouldn't release until it's ready,
or good enough, leading to early calls for pushing back the release dates. This
assumes the issue is incidental: that we underestimated the time it
would take our finite team of internal developers working on bugs to
reach a sufficient level of quality.</p>
<p>OpenStack, being an open source project produced by a large community,
works differently. We have a near-infinite supply of developers. The
issue is, unfortunately, more structural than incidental. The lack of
solidity for a release comes from:</p>
<ul>
<li><strong>Lack of focus on generic bugfixes.</strong> Developers should work on
fixing bugs. Not just the ones they filed or the ones blocking them
in their feature-adding frenzy. Fixing identified, targeted, known
issues. The bugtracker is full of them, but they don't get
attention.</li>
<li><strong>Not enough automated testing to efficiently catch regressions.</strong>
Even if everyone was working on bug fixes, if half your fixes end up
creating a set of regressions, then there is no end to it.</li>
<li><strong>Lack of bug triaging resources.</strong> Only a few people work on
confirming, triaging and prioritizing the flow of incoming bugs. So
the bugs that need the most attention are lost in the noise.</li>
</ul>
<p>For the Diablo cycle, we had fewer than a handful of people focused on
generic bugfixing. The rest of our 150+ authors were busy working on
something else. Pushing back the release for a week, a month or a year
won't help OpenStack solidity if the focus doesn't switch. And if our
focus switches, then there will be no need for a costly release delay.</p>
<h3>Acting now to make Essex a success</h3>
<p>During the Essex cycle, our Project Technical Leads have done their
share of the work by using a very early milestone for their feature
freeze. Keystone, Glance and Nova will freeze at <em>Essex-3</em>, giving us 10
weeks for bugfixing work (compared to the 4 weeks we had for Diablo).
Now we need to take advantage of that long period and really switch our
mindset away from feature development and towards generic bug fixing.</p>
<p>Next week we'll hit feature freeze, so <strong>now</strong> is the time to switch.
If we could:</p>
<ul>
<li>have some more developers working on increasing our integration and
unit test coverage</li>
<li>have the rest of the developers really working on generic bug fixing</li>
<li>have very active core reviewers who get more anal-retentive as we
get closer to release, to avoid introducing regressions that would
not be caught by our automated tests</li>
</ul>
<p>...then I bet that it will lead to a stronger release than any delaying
of the release could give you. Note that we'll also have a <a href="http://wiki.openstack.org/BugSquashingDay/20120202">bug
squashing day</a> on
February 2 that will hopefully help us get on top of old, deprecated
and easy fixes, and give us a clear set of targets for the rest of the
cycle.</p>
<p>The quality of future OpenStack releases hinges on our ability to
switch our focus. That's what we'll be judged on. The world
awaits, and the time is now.</p>Virtualization & Cloud devroom at FOSDEM2012-01-13T14:43:00+01:002012-01-13T14:43:00+01:00Thierry Carreztag:ttx.re,2012-01-13:/virtualization-cloud-devroom-at-fosdem.html<p>The Free and Open source Software Developers' European Meeting, or
<a href="http://fosdem.org/2012/">FOSDEM</a>, is an institution that happens every
year in Brussels. A busy, free and open event that gets a lot of
developers together for two days of presentations and cross-pollination.
There are typically the FOSDEM main tracks (a set of …</p><p>The Free and Open source Software Developers' European Meeting, or
<a href="http://fosdem.org/2012/">FOSDEM</a>, is an institution that happens every
year in Brussels. A busy, free and open event that gets a lot of
developers together for two days of presentations and cross-pollination.
There are typically the FOSDEM main tracks (a set of presentations
chosen by the FOSDEM organization) and a set of devrooms, which are
topic-oriented or project-oriented and can organize their own schedule
freely.</p>
<p>This year, FOSDEM will host an unusual devroom, the Virtualization and
Cloud devroom. It will happen in the Chavanne room, a 550-seat
auditorium that was traditionally used for main tracks. And it will last
for two whole days, while other devrooms typically last for a day or a
half-day.</p>
<p>The Virtualization and Cloud devroom is the result of the merging of
three separate devroom requests: Virtualization, Xen and OpenStack
devrooms. It gives us a larger space and a lot of potential for
cross-pollination across projects! We had a lot of talks proposed, and
here is an overview of what you'll be able to see there.</p>
<h4>Saturday, February 4</h4>
<p>Saturday will be the "cloud" day. We will start with a set of talks
about <strong>OpenStack, past, present and future</strong>. I will do an
<a href="http://fosdem.org/2012/schedule/event/openstack_news">introduction and
retrospective</a> of
what happened last year in the project, Soren Hansen will <a href="http://fosdem.org/2012/schedule/event/hacking_on_nova">guide new
developers to
Nova</a>, and Debo
Dutta will look into future work on <a href="http://fosdem.org/2012/schedule/event/app_scheduling">application scheduling and
Donabe</a>. Next
we'll have a session on various <strong>cloud-related technologies</strong>:
<a href="http://fosdem.org/2012/schedule/event/libguestfs">libguestfs</a>,
<a href="http://fosdem.org/2012/schedule/event/pacemaker_cloud">pacemaker-cloud</a>
and <a href="http://fosdem.org/2012/schedule/event/opennebula">OpenNebula</a>. The
afternoon will start with a nice session on <strong>cloud interoperability</strong>,
including presentations on the
<a href="http://fosdem.org/2012/schedule/event/aeolus">Aeolus</a>,
<a href="http://fosdem.org/2012/schedule/event/compatibleone">CompatibleOne</a> and
<a href="http://fosdem.org/2012/schedule/event/deltacloud">Deltacloud</a>
<a href="http://fosdem.org/2012/schedule/event/dmtf_deltacloud">efforts</a>. We'll
continue with a session on <strong>cloud deployment</strong>, with a strong OpenStack
focus: Ryan Lane will talk about how Wikimedia maintains infrastructure
<a href="http://fosdem.org/2012/schedule/event/wikimedia_infra">like an open source
project</a>, Mike
McClurg will look into
<a href="http://fosdem.org/2012/schedule/event/openstack_xcp_ubuntu">Ubuntu+XCP+OpenStack</a>
deployments, and Dave Walker will introduce the <a href="http://fosdem.org/2012/schedule/event/cloud_orchestration">Orchestra
project</a>. The
day will end with a <a href="http://fosdem.org/2012/schedule/event/osdem">town hall
meeting</a> for all
<strong>OpenStack developers</strong>, including a panel of distribution packagers: I
will blog more about that one in the next weeks.</p>
<h4>Sunday, February 5</h4>
<p>Sunday is more of a "virtualization" day! The day will start early with two
presentations by Hans de Goede about
<a href="http://fosdem.org/2012/schedule/event/spice">Spice</a> and <a href="http://fosdem.org/2012/schedule/event/usb_network_redirect">USB
redirection over the
network</a>.
Then we'll have a session on <strong>virtualization management</strong>, with Guido
Trotter giving more <a href="http://fosdem.org/2012/schedule/event/ganeti_news">Ganeti
news</a> and
<a href="http://fosdem.org/2012/schedule/event/ovirt_intro">three</a>
<a href="http://fosdem.org/2012/schedule/event/ovirt_engine_core">talks</a>
<a href="http://fosdem.org/2012/schedule/event/ovirt_vdsm">about</a> oVirt. In the
afternoon we'll have a more technical session around <strong>virtualization in
development</strong>: Antti Kantee will introduce ultralightweight kernel
service virtualization with <a href="http://fosdem.org/2012/schedule/event/rump_kernels">rump
kernels</a>, Renzo
Davoli will lead a <a href="http://fosdem.org/2012/schedule/event/tracing_virt_workshop">workshop on tracing and
virtualization</a>,
and Dan Berrange will show how to build application <a href="http://fosdem.org/2012/schedule/event/libvirt_lxc_kvm_sandboxes">sandboxes on top of
LXC and KVM with
libvirt</a>.
The day will end with another developers meeting, this time the <strong>Xen
developers</strong> will meet around Ian Campbell and his <a href="http://fosdem.org/2012/schedule/event/xen">Xen deployment
troubleshooting workshop</a>.</p>
<p>All in all, that's two days packed with very interesting presentations,
in a devroom large enough to accommodate a good crowd, so we hope to see
you there !</p>Ending the year well: OpenStack Essex-2 milestone2011-12-20T17:03:00+01:002011-12-20T17:03:00+01:00Thierry Carreztag:ttx.re,2011-12-20:/ending-the-year-well-openstack-essex-2-milestone.html<p>2011 is almost finished, and what a year it has been. We started it with
two core projects and one release behind us. During 2011, we got three
releases out the door, grew from 60 code contributors to about 200,
added three new core projects, and met for two …</p><p>2011 is almost finished, and what a year it has been. We started it with
two core projects and one release behind us. During 2011, we got three
releases out the door, grew from 60 code contributors to about 200,
added three new core projects, and met for two design summits.</p>
<p>The Essex-2 milestone was released last week. Here is our now-regular
overview of the work that made it to OpenStack core projects since the
previous milestone.</p>
<p>Nova was the busiest project. Apart from my work on a new <a href="https://blueprints.launchpad.net/nova/+spec/nova-rootwrap">secure root
wrapper</a>, we
added a pair of OpenStack API extensions to support the
<a href="https://blueprints.launchpad.net/nova/+spec/nova-volume-snapshot-backup-api">creation of snapshots and backups of
volumes</a>,
the <a href="https://blueprints.launchpad.net/nova/+spec/separate-nova-metadata">metadata
service</a>
can now run separately from the API node, network limits can now be set
using a <a href="https://blueprints.launchpad.net/nova/+spec/bandwidth-rate-limit-multipliers-and-base-limits">per-network base and a per-flavor
multiplier</a>,
and a small usability feature lets you retrieve the <a href="https://blueprints.launchpad.net/nova/+spec/lasterror">last
error</a> that
occurred using nova-manage. But Essex is not about new features, it's
more about consistency and stability. On the consistency front, the <a href="https://blueprints.launchpad.net/nova/+spec/xenapi-ha-nova-network">HA
network mode was extended to support
XenServer</a>,
KVM compute nodes now <a href="https://blueprints.launchpad.net/nova/+spec/kvm-report-capabilities">report
capabilities</a>
to zones like Xen ones, and the Quantum network manager now <a href="https://blueprints.launchpad.net/nova/+spec/quantum-nat-parity">supports
NAT</a>.
Under the hood, <a href="https://blueprints.launchpad.net/nova/+spec/nova-vm-state-management">VM state
transitions</a>
have been strengthened, the network data model
<a href="https://blueprints.launchpad.net/nova/+spec/compute-network-info">has</a>
<a href="https://blueprints.launchpad.net/nova/+spec/network-info-model">been</a>
overhauled, internal interfaces now support <a href="https://blueprints.launchpad.net/nova/+spec/internal-uuids">UUID instance
references</a>,
and unused callbacks have <a href="https://blueprints.launchpad.net/nova/+spec/remove-virt-driver-callbacks">been
removed</a>
from the virt driver.</p>
<p>The other projects were all busy starting larger transitions (Keystone's
RBAC, Horizon new user experience, and Glance 2.0 API), leaving less
room for essex-2 features. Glance still saw the addition of a <a href="https://blueprints.launchpad.net/glance/+spec/custom-disk-buffer">custom
directory for data
buffering</a>.
Keystone introduced <a href="https://blueprints.launchpad.net/keystone/+spec/global-templates">global endpoints
templates</a>
and <a href="https://blueprints.launchpad.net/keystone/+spec/keystone-swift-acls">swauth-like ACL
enforcement</a>.
Horizon added UI support for <a href="https://blueprints.launchpad.net/horizon/+spec/cert-download">downloading RC
files</a>,
while migrating under the hood from <a href="https://blueprints.launchpad.net/horizon/+spec/migrate-to-bootstrap">jquery-ui to
bootstrap</a>,
and adding a <a href="https://blueprints.launchpad.net/horizon/+spec/environment-versioning">versioning
scheme</a>
for environment/dependencies.</p>
<p>The next milestone is in a bit more than a month: January 26th, 2012.
Happy new year and holidays to all !</p>Improving Nova privilege escalation model, part 32011-11-30T14:29:00+01:002011-11-30T14:29:00+01:00Thierry Carreztag:ttx.re,2011-11-30:/improving-nova-privilege-escalation-model-part-3.html<p>In the previous two posts of this series, we explored the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">deficiencies
of the current
model</a>
and the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-2.html">features of an alternative
implementation</a>.
In this last post, we'll discuss the advantages of a Python
implementation and open discussion on how to secure it properly.</p>
<h3>Python implementation</h3>
<p>It's quite easy to …</p><p>In the previous two posts of this series, we explored the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">deficiencies
of the current
model</a>
and the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-2.html">features of an alternative
implementation</a>.
In this last post, we'll discuss the advantages of a Python
implementation and open discussion on how to secure it properly.</p>
<h3>Python implementation</h3>
<p>It's quite easy to implement the features that were mentioned in the
previous post in Python. The main advantage of doing so is that the code
can happily live inside Nova code, in particular the filters definition
files can be implemented as Python modules that are loaded if present.
That solves the issue of shipping definitions within Nova and also the
separation of allowed commands based on locally-deployed nodes. The code
is simple and easy to review. The trick is to make sure that no
malicious code can be injected into the elevated-rights process. This is
why I'd like to present a model and open it for comments in the
community.</p>
<h3>Proposed security model</h3>
<p>The idea would be to have Nova code optionally use "sudo nova-rootwrap"
instead of "sudo" as the <em>root_helper.</em> A generic <em>sudoers</em> file would
allow the <em>nova</em> user to run <em>/usr/bin/nova-rootwrap</em> as <em>root</em>, while
stripping environment variables like <em>PYTHONPATH</em>. To load its filters
definitions, <em>nova-rootwrap</em> would try to import a set of predefined
modules (like <em>nova.rootwrap.compute</em>), but if those aren't present, it
should ignore them. Can this model be abused?</p>
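To make that loading model concrete, here is a minimal sketch of how <em>nova-rootwrap</em> could import whichever filter modules happen to be deployed. The module and attribute names are assumptions for the sake of the example, not actual Nova code:

```python
import importlib

# Hypothetical per-node filter modules; only the ones shipped with the
# locally-deployed services will actually be importable.
FILTER_MODULES = ['nova.rootwrap.compute',
                  'nova.rootwrap.network',
                  'nova.rootwrap.volume']


def load_filters(module_names=FILTER_MODULES):
    """Collect command filters from whichever modules are installed."""
    filters = []
    for name in module_names:
        try:
            module = importlib.import_module(name)
        except ImportError:
            # Filter file not deployed on this node type: ignore it.
            continue
        filters.extend(getattr(module, 'filters', []))
    return filters
```

A nice side effect: a pure <em>nova-api</em> node, with no filter modules deployed, naturally ends up with an empty filter list and therefore no allowed commands.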
<p>The obvious issue is to make sure <em>sys.path</em> (the set of directories
from which Python imports its modules) is secure, so that nobody can
insert their own modules in the process. I've given some thoughts to
various checks, but actually there is no way around trusting the default
<em>sys.path</em> you're given when you start <em>python</em> as <em>root</em> from a cleaned
env. If that's compromised, you're toast the moment you "import sys"
anyway. So using <em>sudo</em> to only allow <em>/usr/bin/nova-rootwrap</em> and
cleaning the environment should be enough. Or am I missing something?</p>
<h3>Insecure mode?</h3>
<p>One thing we could do is check that <em>sys.path</em> all belongs to <em>root</em> and
refuse to run in the case it's not. That would tell the user that his
setup is insecure (potentially allowing him to bypass that by running
"sudo nova-rootwrap --insecure" as the <em>root_helper</em>). But that's a
convenience to detect insecure setups, not a security addition (the fact
that it doesn't complain doesn't mean you're safe, it could mean you're
already compromised).</p>
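That ownership check could be sketched like this (a best-effort illustration of the idea, not actual nova-rootwrap code):

```python
import os
import sys


def insecure_sys_path_entries(entries=None):
    """Return the sys.path entries that are not owned by root (uid 0).

    A non-empty result would make nova-rootwrap refuse to run unless
    "--insecure" was passed. As noted above, an empty result proves
    nothing by itself: it only detects obviously insecure setups.
    """
    bad = []
    for entry in (sys.path if entries is None else entries):
        path = entry or '.'  # an empty entry means the current directory
        if os.path.exists(path) and os.stat(path).st_uid != 0:
            bad.append(entry)
    return bad
```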
<h3>Test mode?</h3>
<p>For tests, it's convenient to allow running code from branches. To enable
this (unsafe) mode, you would tweak <em>sudoers</em> to allow it to run
<em>$BRANCH/bin/nova-rootwrap</em> as <em>root</em>, and prepend ".." to <em>sys.path</em>
in order to allow modules to be loaded from <em>$BRANCH</em> (maybe requiring
<em>--insecure</em> mode for good measure). It sounds harmless, since if you
run from <em>/usr/bin/nova-rootwrap</em> you can assume that <em>/usr</em> is safe...
Or should that idea be abandoned altogether?</p>
<h3>Audit</h3>
<p>Nothing beats peer review when it comes to secure design. I call all
Python module-loading experts and security white-hats out there: would
this work? Are those safe assumptions? How much do you like <em>insecure</em>
and <em>test</em> modes? Would you suggest something else? If you're one of
those who can't think in words but require code, you can get a glimpse
of work in progress
<a href="https://github.com/ttx/nova/compare/master...root-wrapper">here</a>. It
will all be optional (and not used by default), so it can be added to
Nova without much damage, but I'd rather do it right from the beginning
:) Please comment !</p>Improving Nova privilege escalation model, part 22011-11-25T11:00:00+01:002011-11-25T11:00:00+01:00Thierry Carreztag:ttx.re,2011-11-25:/improving-nova-privilege-escalation-model-part-2.html<p>In the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">previous post in this
series</a>
we explored the current privilege escalation model used in OpenStack
Compute (Nova), and discussed its limitations. Now that we are able to
plug an alternative model (thanks to the <em>root_helper</em> option), we'll
discuss in this post what features this one should have. If …</p><p>In the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">previous post in this
series</a>
we explored the current privilege escalation model used in OpenStack
Compute (Nova), and discussed its limitations. Now that we are able to
plug an alternative model (thanks to the <em>root_helper</em> option), we'll
discuss in this post what features this one should have. If you think we
need more, please comment!</p>
<h3>Command filters</h3>
<p>The most significant issue with the current model is that <em>sudoers</em>
filters the executable used, but not the arguments. To fix that, our
alternative model should allow precise argument filtering so that only
very specific commands are allowed. It should use lists of filters: if
one matches, the command is executed.</p>
<p>The basic <em>CommandFilter</em> would just check that the executable name
matches (which is what sudoers does). A more advanced <em>RegexpFilter</em>
would check that the number of arguments is right and that they all
match provided regular expressions.</p>
<p>Taking that concept a step further, you should be able to plug any type
of advanced filter. You may want to check that the argument to the
command is an existing directory. Or one that is owned by a specific
user. The framework should allow developers to define their own
<em>CommandFilter</em> subclasses, to be as precise as they want when filtering
the most destructive commands.</p>
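One possible shape for such a filter framework is sketched below. The class names mirror the ones mentioned above, but the code itself is an illustrative sketch, not the eventual Nova implementation:

```python
import os
import re


class CommandFilter(object):
    """Accept a command when its executable path matches."""

    def __init__(self, exec_path):
        self.exec_path = exec_path

    def match(self, userargs):
        return bool(userargs) and userargs[0] == self.exec_path


class RegexpFilter(CommandFilter):
    """Additionally require every argument to match a regular expression."""

    def __init__(self, exec_path, *arg_patterns):
        super(RegexpFilter, self).__init__(exec_path)
        self.arg_patterns = arg_patterns

    def match(self, userargs):
        if not CommandFilter.match(self, userargs):
            return False
        args = userargs[1:]
        if len(args) != len(self.arg_patterns):
            return False
        return all(re.match(pattern + r'$', arg)
                   for pattern, arg in zip(self.arg_patterns, args))


class ExistingDirFilter(CommandFilter):
    """Example custom filter: the one argument must be an existing directory."""

    def match(self, userargs):
        return (CommandFilter.match(self, userargs)
                and len(userargs) == 2
                and os.path.isdir(userargs[1]))


def match_filter(filters, userargs):
    """Return the first filter in the list accepting the command, or None."""
    for f in filters:
        if f.match(userargs):
            return f
    return None
```

A filter list could then contain, say, <code>RegexpFilter('/bin/kill', '-9', r'\d+')</code>, allowing <em>kill -9</em> of a numeric pid and nothing else.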
<h3>Running as</h3>
<p>In some cases, Nova runs, as <em>root</em>, commands that it should just run as
a different user. For example, it runs <em>kill</em> with <em>root</em> rights to
interact with <em>dnsmasq</em> processes (owned by the <em>nobody</em> user). It
doesn't really need to run <em>kill</em> with <em>root</em> rights at all. Filters
should therefore also make it possible to specify the lower-privileged user
that a specific matching command should run under.</p>
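Sketched in code, the matching filter would carry the target user and the wrapper would demote the command accordingly (a hypothetical helper, not actual Nova code):

```python
def demote_command(run_as, userargs):
    """Prefix a command so it runs under a less-privileged user.

    `run_as` would come from the matching filter's definition;
    'root' means no demotion is needed.
    """
    if run_as != 'root':
        return ['sudo', '-u', run_as] + list(userargs)
    return list(userargs)
```

With something like this in place, the <em>dnsmasq</em> example above would declare its <em>kill</em> filter with <code>run_as='nobody'</code>.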
<h3>Shipping filters in Nova code</h3>
<p>Filter lists should live within the Nova codebase and merely be deployed by
packaging, rather than being maintained in the packaging itself. That way,
anyone adding a new escalated command can add the corresponding filter in the
same commit.</p>
<h3>Limiting commands based on deployed nodes</h3>
<p>As mentioned in the <a href="https://ttx.re/improving-nova-privilege-escalation-model-part-1.html">previous
post</a>,
<em>nova-api</em> nodes don't actually need to run any command as <em>root</em>, but
in the current model their <em>nova</em> user is still allowed to run plenty of
them. The solution for that is to separate the command filters based on
the type of node that is allowed to run them, in different files. Then
deploy the <em>nova-compute</em> filters file only on <em>nova-compute</em> nodes, the
<em>nova-volume</em> filters file only on <em>nova-volume</em> nodes... A pure
<em>nova-api</em> node will end up with no filters being deployed at all,
effectively not being allowed any command as root. So this can be solved
by smart packaging of filter files.</p>
<h3>Missing features ?</h3>
<p>Those are the features that I found useful for our alternative privilege
escalation model. If you see others, please comment here! I'd like to
make sure all the useful features are included. In the next post, we'll
discuss a proposed Python implementation of this framework, and the
challenges around securing it.</p>Improving Nova privilege escalation model, part 12011-11-23T16:31:00+01:002011-11-23T16:31:00+01:00Thierry Carreztag:ttx.re,2011-11-23:/improving-nova-privilege-escalation-model-part-1.html<p>In this series, I'll discuss how to strengthen the privilege escalation
model for OpenStack Compute (Nova). Due to the way networking,
virtualization and volume management work, some Nova nodes need to be
able to run some commands as root. To reduce the effects of a potential
compromise (attacker being able …</p><p>In this series, I'll discuss how to strengthen the privilege escalation
model for OpenStack Compute (Nova). Due to the way networking,
virtualization and volume management work, some Nova nodes need to be
able to run some commands as root. To reduce the effects of a potential
compromise (attacker being able to run arbitrary code as the Nova user),
we want to limit the commands that Nova can run as root on a given node
to the strict minimum. Today we'll explain how the current model
works, its limitations, and the groundwork already implemented during
the Diablo cycle to improve that.</p>
<h3>Current model: sudo and sudoers</h3>
<p>Currently, in a typical Nova deployment, the nodes run under an account
with limited rights (usually called "nova"). When Nova needs to run a
command as root, it prepends "sudo" to the command. The nova packages of
your distribution of choice are supposed to ship a <strong>sudoers</strong> file that
contains all the commands that nova is allowed to run as root without
providing a password. This is a privilege escalation security model
which is pretty well-known and easy to audit.</p>
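For readers unfamiliar with the mechanism, such a <em>nova_sudoers</em> file looks roughly like this. This is an illustrative excerpt using commands discussed below, not the exact file any distribution ships:

```
# Allow the nova user to run these commands as root, without a password
nova ALL = (root) NOPASSWD: /bin/chown, /bin/kill, /bin/dd, /usr/bin/tee
```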
<h3>Limitations of the current model</h3>
<p>That said, in the context of Nova, this model is very limited. The
sudoers file does not allow efficient filtering of arguments, so you can
basically pass any argument to an allowed command... and some of the
commands that nova wants to use are rather open-ended. As an example,
the current nova_sudoers file contains commands like <em>chown</em>, <em>kill</em>,
<em>dd</em> or <em>tee</em>, which are more than enough to compromise a target system
completely.</p>
<p>There are a couple of other limitations. The sudoers file belongs to the
distribution's packaging, so it's difficult to keep it in sync with the
rest of Nova code when someone wants to add a privileged command. Last
but not least, the same nova_sudoers file is used for any type of Nova
node. A Nova API server, which does not <em>need</em> to run any command as
root, is still allowed to run all the commands that a compute node
requires, for example. Those other limitations could be fixed while
still using sudo and sudoers files, but the first limitation would
remain. Can we do better?</p>
<h3>Substitute a wrapper for sudo</h3>
<p>To be able to propose alternative privilege escalation security models,
we first needed to be able to change all the "sudo" calls in the code
and make them potentially use something else. That's <a href="https://blueprints.launchpad.net/nova/+spec/refactor-privesc">what I worked
on</a> late
during the Diablo timeframe: creating a <em>run_as_root</em> option in
nova.utils.execute that would use a configurable <strong>root_helper</strong>
command (by default, "sudo"), and force all the existing calls to go
through that (rather than blindly calling "sudo" themselves).</p>
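In simplified form, the resulting call path looks like this (a sketch of the idea, not the actual nova.utils code):

```python
import subprocess


def execute(*cmd, **kwargs):
    """Run a command, optionally through the configured root helper."""
    run_as_root = kwargs.pop('run_as_root', False)
    root_helper = kwargs.pop('root_helper', 'sudo')  # configurable default
    if run_as_root:
        # Prepend the helper instead of hardcoding "sudo" at call sites.
        cmd = tuple(root_helper.split()) + cmd
    return subprocess.check_output(list(cmd)).decode()
```

Setting the helper to "sudo nova-rootwrap" is then enough to route every privileged call through the new wrapper.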
<p>Thanks to the default root_helper, everything still behaves the same,
but now we have the possibility to use <em>something else</em>, if we can be
smarter than sudoers files. Like call a wrapper that will do advanced
filtering of the command that nova wants to use. In part 2 of this
series, we'll look into a proposed, alternative Python-based
root_helper and open discussion on its security model.</p>OpenStack Essex-1 milestone2011-11-14T14:50:00+01:002011-11-14T14:50:00+01:00Thierry Carreztag:ttx.re,2011-11-14:/openstack-essex-1-milestone.html<p>Last week saw the delivery of the first milestone of the Essex
development cycle for Keystone, Glance, Horizon and Nova. This early
milestone collected about two months of post-Diablo work... but it's not
as busy in new features as most would think, since a big part of those
last two …</p><p>Last week saw the delivery of the first milestone of the Essex
development cycle for Keystone, Glance, Horizon and Nova. This early
milestone collected about two months of post-Diablo work... but it's not
as busy in new features as most would think, since a big part of those
last two months was spent releasing OpenStack 2011.3 and brainstorming
Essex features.</p>
<p>Keystone delivered their first milestone as a core project, with a few
new features like support for <a href="https://blueprints.launchpad.net/keystone/+spec/support-multiple-credentials">additional
credentials</a>,
<a href="https://blueprints.launchpad.net/keystone/+spec/keystone-service-registration">service
registration</a>
and using <a href="https://blueprints.launchpad.net/keystone/+spec/2-way-ssl">certificate-based SSL client authentication to authenticate
services</a>. It
should be easier to upgrade from now on, with support for <a href="https://blueprints.launchpad.net/keystone/+spec/database-migrations">database
migrations</a>.</p>
<p>Glance developers were busy preparing significant changes that will land
in the next milestone. Several bugfixes and a few features made it to
essex-1 though, including the long-awaited <a href="https://blueprints.launchpad.net/glance/+spec/support-ssl">SSL client
connections</a>.
It also moved to <a href="https://blueprints.launchpad.net/glance/+spec/uuid-image-identifiers">UUID image
identifiers</a>.</p>
<p>The Nova essex-1 effort was mostly spent on bugfixing, with <a href="https://launchpad.net/nova/+milestone/essex-1">129 bugs
fixed</a>. New features
include a new <a href="https://blueprints.launchpad.net/nova/+spec/xenapi-sm-support">XenAPI SM volume
driver</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/quantum-dhcp-parity">DHCP support in the Quantum network
manager</a>,
and optional <a href="https://blueprints.launchpad.net/nova/+spec/deferred-delete-instance">deferred deletion of
instances</a>.
Under the hood, the <a href="https://blueprints.launchpad.net/nova/+spec/volume-cleanup">volume
code</a> was
significantly cleaned up and <a href="https://blueprints.launchpad.net/nova/+spec/xml-templates">XML
templates</a>
were added to simplify serialization in extensions.</p>
<p>Essex-1 was also the first official OpenStack milestone for Horizon,
also known as the Dashboard. New features include an <a href="https://blueprints.launchpad.net/horizon/+spec/instance-detail">instance
details</a>
page, support for <a href="https://blueprints.launchpad.net/horizon/+spec/volumes-interface">managing Nova
volumes</a>
and a new <a href="https://blueprints.launchpad.net/horizon/+spec/extensible-architecture">extensible modular
architecture</a>.
The rest of the effort was spent on catching up with the best of core
projects in
<a href="https://blueprints.launchpad.net/horizon/+spec/update-localization">internationalization</a>,
<a href="https://blueprints.launchpad.net/horizon/+spec/sphinx-docs">developer</a>
<a href="https://blueprints.launchpad.net/horizon/+spec/horizon-doc-site">documentation</a>,
and QA (<a href="https://blueprints.launchpad.net/horizon/+spec/frontend-testing">frontend
testing</a>
and <a href="https://blueprints.launchpad.net/horizon/+spec/javascript-unit-tests">JS unit
tests</a>).</p>
<p>Now, keep your seatbelt fastened, as we are one month away from essex-2,
where lots of new development work is expected to land !</p>Four areas for strategic contributions in OpenStack2011-10-06T18:46:00+02:002011-10-06T18:46:00+02:00Thierry Carreztag:ttx.re,2011-10-06:/four-areas-for-strategic-contributions-in-openstack.html<p>The OpenStack Essex Design Summit just ended, and several people those
last three days have asked me to give a bit more substance to what I
exactly meant by "Strategic contributions" in my <a href="https://ttx.re/the-next-step-for-openstack.html">last
article</a>.
Ensure the long-term health of the project by investing in
project-centered resources, right, but what …</p><p>The OpenStack Essex Design Summit just ended, and several people those
last three days have asked me to give a bit more substance to what I
exactly meant by "Strategic contributions" in my <a href="https://ttx.re/the-next-step-for-openstack.html">last
article</a>.
Ensure the long-term health of the project by investing in
project-centered resources, right, but what can we do now? What actions
can we take today?</p>
<p>Based on the very interesting Summit discussions we had, I think the
strategic contributions that can be made today fall into 4 categories.</p>
<h3>Commonality</h3>
<p>Brian Lamar had a great session on reviving the OpenStack Common effort:
identifying common functions across OpenStack projects, converging
on a single implementation, and maintaining it in a common library.
The goal is twofold: present a more uniform face (logs and configuration
files, for example, should follow the same syntax), and make sure that
we don't waste precious development resources on duplicated
work. This effort failed in the past due to a lack of resources being
dedicated long-term to it, so it sounds like a nice and easy area to
start contributing strategically.</p>
<h3>Consistency</h3>
<p>The second (and related) area is consistency. Tactical contributions
have advanced the state of very specific features applying to very
specific setups, at the expense of the resulting coherence. Vish led a
good session on making the featureset between KVM and Xen hypervisors
converge, not only in terms of functions, but also in term of concepts.
I think that analysis needs to happen more generally in OpenStack: is
the resulting product coherent? How can we plug the holes in those
feature matrices?</p>
<h3>Security</h3>
<p>Another important area that emerged from the Summit, especially with Ray
Hookway's session, is work on security. Strengthen the architecture (to
limit the attack surface and lay defense in depth), formalize the
process around vulnerability handling and disclosure, and coordinate the
necessary effort on auditing. This work is just getting started, and I
hope I will find time to help set it up.</p>
<h3>Quality</h3>
<p>Last but certainly not least, we need to invest in durable quality. Jay
Pipes pushed a number of sessions where we pinpointed the need to
identify the issues (QA), fix them (Bug squads) and prevent them from
happening again (automated tests & continuous integration). That's by
far the most complex area and the most difficult to coordinate, but the
basic resource needed there is manpower, and the setup of
company-neutral common workgroups that everyone can contribute to is the
first step.</p>
<p>Whether you bet your business on OpenStack, or you're just interested in
the long-term health of the open source project, give your developers
time to contribute to those areas and workgroups, and we'll all be a lot
better as a result.</p>The next step for OpenStack2011-09-28T12:13:00+02:002011-09-28T12:13:00+02:00Thierry Carreztag:ttx.re,2011-09-28:/the-next-step-for-openstack.html<p>Just after a release, discovery of significant bugs always revives
discussion around the need for maintenance branches or point releases.
Those discussions, however, are not solving the root cause for the
issue, but merely try to do damage control on the consequences.</p>
<p>The root cause for presence of significant bugs …</p><p>Just after a release, discovery of significant bugs always revives
discussion around the need for maintenance branches or point releases.
Those discussions, however, are not solving the root cause for the
issue, but merely try to do damage control on the consequences.</p>
<p>The root cause for presence of significant bugs in a given release is
not the presence or absence of maintenance branches. It's not about the
choice of time-based cycles, or the length of it. It's about lack of
focus on testing and fixing the release deliverables. If only a few
people work on that, while all the others are busy adding new features
in trunk, delaying your release by one or more weeks won't change
anything.</p>
<h3>From tactical to strategic contributions</h3>
<p>OpenStack is one of the few open source projects where development is
truly shared across multiple companies. The trick is, most companies
involved so far are doing what I call <em>tactical contributions</em>: adding
features they care about, fixing bugs that affect them. Tactical
contributions are great to expand a project scope, community and
mindshare, however they add technical debt. Companies involved need to
move to what I call <em>strategic contributions</em>: funding development
resources that care about the end result, the release deliverables, the
absence of bugs, the coherence of the features.</p>
<p>The obvious comparison point is the Linux kernel. The reason why it's
successful, despite lots of companies only involved in tactical
contributions, is that at its core it has a strong group of key
developers whose primary allegiance goes to the Linux kernel itself, no
matter what company they happen to work for. Those companies understood
the necessity of funding strategic contributions.</p>
<p>Currently, especially in Nova, it's quite difficult to get merge
proposals reviewed, random bugs fixed, integration tests contributed, or
holes in scope covered. That's because most groups are focused on their
own objectives, rather than the common project objectives. That's the
mindset we need to change now, and that's the only thing that can give
us better releases.</p>
<h3>The cost of strategic contributions</h3>
<p>The problem with strategic contributions is that they are typically more
costly than tactical contributions, which have a more obvious return on
investment. Agreeing to have developers on payroll "fixing what needs
to be fixed", or giving 30% free time to all your developers so that
they can work on project objectives rather than only your own is not
that easy. But OpenStack has now proven that it's here to stay, lots of
companies have now bet their strategy on it, so I think the time is now.</p>
<p>If we don't adjust, OpenStack in general (and Nova in particular) will
crumble under the technical debt of tactical contributions, and everyone
involved will lose. We might need to adjust governance to encourage
other companies to invest long-term in project-centered resources. We'll
need to set up open, multi-company workgroups (like the recently-setup
QA team) to clearly show that it's a common effort. It won't happen in a
day, but if we don't change our mindset now, no matter how we adjust the
release cycle, Essex deliverables will be of the same quality as Diablo.</p>Proposing sessions for the Essex Design Summit2011-09-07T12:26:00+02:002011-09-07T12:26:00+02:00Thierry Carreztag:ttx.re,2011-09-07:/proposing-sessions-for-essex-design-summit.html<p>In less than a month the OpenStack development community will gather in
Boston for three days of discussions and brainstorming around the Essex
development cycle.</p>
<p>The main part of the summit is the session tracks. The sessions are
proposed by the participants and should generally be about core or
incubated …</p><p>In less than a month the OpenStack development community will gather in
Boston for three days of discussions and brainstorming around the Essex
development cycle.</p>
<p>The main part of the summit is the session tracks. The sessions are
proposed by the participants and should generally be about core or
incubated projects. There are three types of sessions:</p>
<ul>
<li><strong>Brainstorm</strong> sessions (55 min.) are used to discuss and come up
with a solution for complex issues.</li>
<li><strong>Rubberstamp</strong> sessions (25 min.) are used to present and review an
already-designed plan. Those should generally be linked to a project
blueprint.</li>
<li><strong>Discovery</strong> sessions (25 min.), where experts dive deep
into a section of code or a feature.</li>
</ul>
<p>You can already go to <a href="http://summit.openstack.org">http://summit.openstack.org</a> and see or file
session proposals. Deadline for proposals is September 27, and the
sooner you propose, the more chances you have to get accepted. The
proposals will be reviewed by the PTLs and myself and, if accepted, will
get scheduled in one of the available time slots. Sessions about
official Core or Incubated projects will get priority.</p>
<p>The other part of the summit is an
<a href="http://en.wikipedia.org/wiki/Unconference">unconference</a>: we will have
a whole room dedicated to 55 min. presentations that will be scheduled
directly on big whiteboards at the summit itself. Any presentation on
any subject vaguely related to OpenStack is acceptable! We'll also have
half-an-hour worth of 5-minute lightning talks after lunch every day,
also scheduled directly at the summit itself on a first-come, first-served
basis.</p>
<p>See for reference: <a href="http://wiki.openstack.org/Summit">http://wiki.openstack.org/Summit</a></p>
<p>I hope that this mix of scheduled sessions and unconference style will
allow everyone to make the most of those three days. See you there !</p>Essex Design Summit: the waiting list is open2011-09-01T09:48:00+02:002011-09-01T09:48:00+02:00Thierry Carreztag:ttx.re,2011-09-01:/essex-design-summit-the-waiting-list-is-open.html<p>The 200 open seats for the Essex Design Summit were all registered in
less than 9 days! If you missed the boat, you can still register on the
waiting list at <a href="http://summit.openstack.org">http://summit.openstack.org</a>.</p>
<p>For the last seats we need to give priority to existing OpenStack
developers and upstream …</p><p>The 200 open seats for the Essex Design Summit were all registered in
less than 9 days ! If you missed the boat, you can still register on the
waiting list at <a href="http://summit.openstack.org">http://summit.openstack.org</a>.</p>
<p>For the last seats we need to give priority to existing OpenStack
developers and upstream/downstream community members, so the waiting
list will be reviewed manually. You will receive an email if you get
cleared and get one of the very last seats for the summit.</p>
<p>Sometime next week, the website should allow registered attendees (as
well as attendees on the waiting list) to propose sessions for the
summit, so stay tuned!</p>
<h2>Features are in: the diablo-4 milestone (2011-08-31)</h2>
<p>August was very busy for OpenStack Nova and Glance developers, and the
culmination of those efforts is the delivery of the final feature
milestone of the Diablo development cycle: diablo-4.</p>
<p>Glance <a href="https://launchpad.net/glance/+milestone/diablo-4">gained</a> final
<a href="https://blueprints.launchpad.net/glance/+spec/authentication">integration with the Keystone common authentication
system</a>,
support for <a href="https://blueprints.launchpad.net/glance/+spec/shared-images">sharing images between groups of
tenants</a>, a
new <a href="https://blueprints.launchpad.net/glance/+spec/glance-notifications">notification
system</a>
and <a href="https://blueprints.launchpad.net/glance/+spec/i18n">i18n</a>.
<a href="https://launchpad.net/nova/+milestone/diablo-4">Twelve</a> feature
blueprints were completed in Nova, including final <a href="https://blueprints.launchpad.net/nova/+spec/finalize-nova-auth">Keystone
integration</a>,
the long-awaited capacity to <a href="https://blueprints.launchpad.net/nova/+spec/boot-from-volume">boot from
volumes</a>,
a <a href="https://blueprints.launchpad.net/nova/+spec/configuration-drive">configuration
drive</a>
to pass information to instances,
<a href="https://blueprints.launchpad.net/nova/+spec/linuxnet-vif-plugging">integration</a>
<a href="https://blueprints.launchpad.net/nova/+spec/nova-quantum-vifid">points</a>
for Quantum, KVM <a href="https://blueprints.launchpad.net/nova/+spec/kvm-block-migration">block migration
support</a>,
as well as
<a href="https://blueprints.launchpad.net/nova/+spec/os-security-groups">several</a>
<a href="https://blueprints.launchpad.net/nova/+spec/add-remove-securitygroup-instance">improvements</a>
<a href="https://blueprints.launchpad.net/nova/+spec/add-options-network-create-os-apis">to</a>
the OpenStack API.</p>
<p>Diablo-4 is mostly feature-complete: a few blueprints for standalone
features were granted an exception and will land post-diablo-4, such as
volume types and virtual storage arrays <a href="https://launchpad.net/nova/+milestone/diablo-rbp">in
Nova</a>, or SSL
support <a href="https://launchpad.net/glance/+milestone/diablo-rbp">in Glance</a>.</p>
<p>Now we race towards the release branch point (September 8th), which is
when the Diablo release branch will start to diverge from a newly-open
Essex development branch. The focus is on testing, bug fixing and
consistency... up until September 22, the Diablo release day.</p>
<h2>Elite committers vs. Gated trunk (2011-08-12)</h2>
<p>How do you control what gets into your open source project's code? The
classic model, inherited from pre-DVCS days, is to have a set of
"committers" who are trusted with direct access, while the vast majority
of project "contributors" must kindly ask them to sponsor their patches.
You can find that model in a lot of projects, including most Linux
distributions. This model doesn't scale that well -- even trusted
individuals are error-prone, and nobody should escape peer review. But the
main issue is the binary nature of the committer power: it divides your
community (us vs. them) and does not really encourage contribution.</p>
<h3>Gated trunk</h3>
<p>The solution to this is to implement a gated trunk with a code review
system like GitHub pull requests or Launchpad branch merge proposals.
Your "committers" become "core developers" that have a casting vote on
whether the proposal should be merged. Everyone goes through the peer
review process, and the peer review process is open for everyone: your
"contributors" become "developers" that can comment too. You reduce the
risk of human error and the community is much healthier, but some issues
remain: your core developers can still (wittingly or unwittingly) evade
peer review, and the final merge process is human and error-prone.</p>
<h3>Automation ftw</h3>
<p>The solution is to add more automation, and not trust humans with direct
repository access anymore. An "automated gated trunk" bot can watch for
reviews and when a set of pre-defined rules are met (human approvals,
testsuites passed, etc.), trigger the trunk merge automatically. This
removes human error from the process, and effectively turns your "core
developers" into "reviewers". This last aspect makes for a very healthy
development community: there is no elite group anymore, just a developer
subgroup with additional review duties.</p>
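<p>Such a bot's pre-defined rules boil down to a simple predicate. Here is a minimal sketch with made-up field names, for illustration only (this is not Tarmac's or Gerrit's actual logic):</p>

```python
# Hypothetical sketch of an automated gated trunk: the bot merges a
# proposal only when every pre-defined rule holds. Names are illustrative.

def ready_to_merge(proposal, min_approvals=2):
    """Return True when a merge proposal meets all gating rules."""
    approvals = [r for r in proposal["reviews"] if r["vote"] == "approve"]
    vetoes = [r for r in proposal["reviews"] if r["vote"] == "reject"]
    return (
        len(approvals) >= min_approvals   # enough reviewer approvals
        and not vetoes                    # no outstanding objection
        and proposal["tests_passed"]      # the test suite ran and passed
    )

proposal = {
    "reviews": [
        {"reviewer": "alice", "vote": "approve"},
        {"reviewer": "bob", "vote": "approve"},
    ],
    "tests_passed": True,
}
print(ready_to_merge(proposal))  # -> True: the bot would merge to trunk
```

<p>The point is that no human pushes the merge button: the bot applies the same checks to everyone, core developer or not.</p>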
<h3>Gerrit</h3>
<p>In OpenStack, we used Tarmac in conjunction with Launchpad/bzr code
review in our first attempt to implement this. As we considered
migration to git, the lack of support for tracking formal approvals in
GitHub code review prevented the implementation of a complex automated
gated trunk on top of GitHub, so we deployed Gerrit. I was somewhat
resistant to adding a new tool to our toolset mix, but the
incredible Monty Taylor and Jim Blair did a great integration job, and I
realize now that this gives us a lot more flexibility and room for
future evolution. For example I like that some tests can be run when the
change is proposed, rather than only after the change is approved (which
results in superfluous review roundtrips).</p>
<p>At the end of the day, gated trunk automation helps in having a
welcoming, non-elitist (and lazy) developer community. I wish more
projects, especially distributions, would adopt it.</p>
<h2>Summer of OpenStack: the diablo-3 milestone (2011-07-29)</h2>
<p>No rest for the OpenStack developers: today saw the release of the July
development efforts for Nova and Glance, the Diablo-3 milestone.</p>
<p>Glance gained two performance options: API servers can now <a href="https://blueprints.launchpad.net/glance/+spec/local-image-cache">cache image
data on the local
filesystem</a>,
and a <a href="https://blueprints.launchpad.net/glance/+spec/delayed-delete">delayed
delete</a>
feature allows image deletion to happen asynchronously.</p>
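<p>The general idea behind a delayed delete can be sketched as follows: the user-facing call just flags the image so it returns fast, and a background scrubber reclaims the data later. This is an illustrative sketch of the technique, not Glance's actual code:</p>

```python
# Illustrative sketch of delayed delete: the user-facing call only flags
# the record; a scrubber task reclaims storage asynchronously.
import time

registry = {"img-1": {"status": "active", "deleted_at": None}}
pending_scrub = []

def delete_image(image_id):
    """Fast path: mark deleted and queue the real cleanup."""
    registry[image_id]["status"] = "pending_delete"
    registry[image_id]["deleted_at"] = time.time()
    pending_scrub.append(image_id)

def run_scrubber(min_age_seconds=0):
    """Background task: actually remove data for old-enough deletions."""
    for image_id in list(pending_scrub):
        if time.time() - registry[image_id]["deleted_at"] >= min_age_seconds:
            registry[image_id]["status"] = "deleted"  # data reclaimed here
            pending_scrub.remove(image_id)

delete_image("img-1")               # returns immediately
print(registry["img-1"]["status"])  # -> pending_delete
run_scrubber()
print(registry["img-1"]["status"])  # -> deleted
```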
<p>With a bit more than 100 trunk commits over the month, Nova gained
support for <a href="https://blueprints.launchpad.net/nova/+spec/nova-multi-nic">multiple
NICs</a>,
FlatDHCP network mode now supports a
<a href="https://blueprints.launchpad.net/nova/+spec/ha-flatdhcp">high-availability</a>
option (read more about it
<a href="http://unchainyourbrain.com/openstack/13-networking-in-nova">here</a>),
instances can be
<a href="https://blueprints.launchpad.net/nova/+spec/instance-migration">migrated</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/system-usage-records">system usage
notifications</a>
were added to the notification framework. Network code was also
<a href="https://blueprints.launchpad.net/nova/+spec/network-refactoring">refactored</a>
in order to facilitate integration with the new networking projects, and
countless fixes were made in <a href="https://blueprints.launchpad.net/nova/+spec/openstack-compute-api-11-finalization">OpenStack API 1.1
support</a>.</p>
<p>We have one more milestone left (diablo-4) before the final 2011.3
release... still a lot to do!</p>
<h2>June in OpenStack: the diablo-2 milestone (2011-07-04)</h2>
<p>About a month ago I commented on the features delivered in the diablo-1
milestone. Last week we released the diablo-2 milestone for your testing
and feature evaluation pleasure.</p>
<p>Most of the <a href="https://launchpad.net/glance/diablo/diablo-2">changes to
Glance</a> were made under
the hood. In particular the <a href="https://blueprints.launchpad.net/glance/+spec/wsgi-refactoring">new WSGI
code</a>
from Nova was ported to Glance, and images collections can now be
<a href="https://blueprints.launchpad.net/glance/+spec/api-results-ordering">sorted</a>
by a subset of the image model attributes. Most of the groundwork to
support Keystone authentication was done, but that should only be
available in diablo-3!</p>
<p>Those same initial <a href="https://blueprints.launchpad.net/nova/+spec/integrate-nova-authn">Keystone
integration</a>
steps were also done for Nova, along with plenty of other features. We
now support <a href="https://blueprints.launchpad.net/nova/+spec/distributed-scheduler">distributed
scheduling</a>
for complex deployments, together with a new <a href="https://blueprints.launchpad.net/nova/+spec/nova-instance-referencing">instance referencing
model</a>.
Also added during this timeframe were support for <a href="https://blueprints.launchpad.net/nova/+spec/openstack-api-floating-ips">floating
IPs</a>
(in OpenStack API), a basic mechanism for <a href="https://blueprints.launchpad.net/nova/+spec/notification-system">pushing notifications
out</a> to
interested parties, <a href="https://blueprints.launchpad.net/nova/+spec/provider-firewall">global firewall
rules</a>,
and an <a href="https://blueprints.launchpad.net/nova/+spec/schedule-instances-on-heterogeneous-architectures">instance type extra
specs</a>
table that can be used in a capabilities-aware scheduler. More invisible
to the user, we completed efforts to <a href="https://blueprints.launchpad.net/nova/+spec/error-codes">standardize error
codes</a> and
refactored the OpenStack API
<a href="https://blueprints.launchpad.net/nova/+spec/nova-api-serialization">serialization</a>
mechanism.</p>
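<p>To illustrate how an extra specs table can feed a capabilities-aware scheduler, here is a minimal hypothetical sketch (not Nova's actual scheduler code): a host qualifies only if its advertised capabilities satisfy every key/value pair the instance type requires.</p>

```python
# Hypothetical sketch of capabilities-aware scheduling using instance
# type "extra specs": keep hosts whose capabilities satisfy every spec.

def hosts_matching(extra_specs, hosts):
    """Filter hosts on exact key/value capability matches."""
    return [
        name for name, caps in hosts.items()
        if all(caps.get(k) == v for k, v in extra_specs.items())
    ]

hosts = {
    "host1": {"arch": "x86_64", "gpu": "no"},
    "host2": {"arch": "arm", "gpu": "no"},
    "host3": {"arch": "x86_64", "gpu": "yes"},
}
specs = {"arch": "x86_64"}  # required by the chosen instance type
print(hosts_matching(specs, hosts))  # -> ['host1', 'host3']
```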
<p>And there is plenty more coming up in diablo-3... scheduled for release
on July 28th.</p>
<h2>Time-based releases are good for community (2011-07-01)</h2>
<p>There was a bit of
<a href="http://tlohg.com/distributions-projects-and-releases">discussion</a>
lately on feature-based vs. time-based release schedules in OpenStack,
and here are my thoughts on it. In a feature-based release cycle, you
release when a given set of features is implemented, while in a
time-based release cycle, you release at a predetermined date, with
whatever is ready at the time.</p>
<h3>Release early, release often</h3>
<p>One of the basic principles in open source (and agile) is to release early
and release often. This allows fast iterations, which avoid the classic
drawbacks of <a href="http://en.wikipedia.org/wiki/Waterfall_model">waterfall
development</a>. If you push
that logic to the extreme, you can release at every commit: that is what
continuous deployment is about. Continuous deployment is great for web
services, where there is only one deployment of the software and it runs
the latest version.</p>
<p>OpenStack projects actually provide builds (and packages) for every
commit made to development trunk, but we don't call them releases. For
software that has multiple deployers, having "releases" that combine a
reasonable amount of new features and bugfixes is more appropriate.
Hence the temptation of doing feature-based releases: release often,
whenever the next significant feature is ready.</p>
<h3>Frequent feature-based releases</h3>
<p>The main argument of supporters of frequent feature-based releases is
that time-based cycles are too long, so they delay the time it takes for
a given feature to be available to the public. But time-based isn't
about "a long time". It's about "a predetermined amount of time". You
can make that "predetermined amount of time" as small as needed...</p>
<p>Supporters of feature-based releases say that time-based releases are
good for distributions, since those have limited bearing on the release
cycles of their individual subcomponents. I'd argue that time-based
releases are always better, for anyone that wants to do open development
in a community.</p>
<h3>Time-based releases as a community enabler</h3>
<p>If you work with a developer community rather than with a single-company
development group, the project doesn't have full control over its
developers, but just limited influence. Doing feature-based releases is
therefore risky, since you have no idea how long it will take to have a
given feature implemented. It's better to have frequent time-based
releases (or milestones) that regularly deliver to a wider audience
what happens to be implemented at a given, predetermined date.</p>
<p>If you work with an open source community rather than with a
single-company product team, you want to help the different separate
stakeholders to synchronize. Pre-announced release dates allow everyone
(developers, testers, distributions, users, marketers, press...) to be
on the same page, following the same cadence, responding to the same
rhythm. It might be convenient for developers to release "whenever it
makes sense", but the wider community benefits from having predictable
release dates.</p>
<p>It's no wonder that most large open source development communities
switched from feature-based releases to time-based releases: it's about
the only way to "release early, release often" with a large community.
And since we want the OpenStack community to be as open and as large as
possible, we should definitely continue to do time-based releases, and
to announce the schedule as early as we can.</p>
<h2>A month in OpenStack Diablo: the diablo-1 milestone (2011-06-02)</h2>
<p>Back at the OpenStack Design Summit in Santa Clara, we decided to switch
from a 3-month cycle to a 6-month coordinated release cycle, with more
frequent milestone deliveries in the middle.</p>
<p>Lately we have been busy adapting the release processes to match the
delivery of the first milestones. Swift 1.4.0 was released last Tuesday,
and today sees the release of the diablo-1 milestone for Nova and
Glance.</p>
<p>What should you expect from diablo-1, just 4 weeks after the design
summit? In this short timeframe lots of features have been worked on,
and the developers managed to land quite a few of them in time for
diablo-1.</p>
<p>Glance's API was improved to support <a href="https://blueprints.launchpad.net/glance/+spec/api-results-filtering">filtering of /images and
/images/detail
results</a>
and <a href="https://blueprints.launchpad.net/glance/+spec/api-limited-results">limiting and paging of
results</a>.
This made <a href="https://blueprints.launchpad.net/glance/+spec/api-versioning">support of API
versioning</a>
necessary. It also grew a <a href="https://blueprints.launchpad.net/glance/+spec/iso-boot">new disk
format</a> ("iso")
that should ultimately allow booting ISOs directly in Nova.</p>
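<p>The limit-and-marker style of paging mentioned above can be sketched generically: each page returns at most <code>limit</code> items, starting just after the <code>marker</code> id the client saw on the previous page. This illustrates the general technique, not Glance's actual code, and the parameter names are assumptions:</p>

```python
# Illustrative marker/limit pagination: each page returns at most
# `limit` items, starting just after the `marker` id of the last item
# the client saw on the previous page.

def list_images(images, limit=2, marker=None):
    ids = [img["id"] for img in images]
    start = ids.index(marker) + 1 if marker is not None else 0
    return images[start:start + limit]

images = [{"id": f"img-{n}"} for n in range(5)]
page1 = list_images(images, limit=2)                          # img-0, img-1
page2 = list_images(images, limit=2, marker=page1[-1]["id"])  # img-2, img-3
print([img["id"] for img in page2])  # -> ['img-2', 'img-3']
```

<p>Marker-based paging stays stable when new items are inserted, which is why it is usually preferred over page numbers for APIs.</p>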
<p>On Nova's side, the most notable addition is support for
<a href="https://blueprints.launchpad.net/nova/+spec/snapshot-volume">snapshotting</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/clone-volume">cloning</a>
volumes with the EC2 API. The XenServer plugin now supports <a href="https://blueprints.launchpad.net/nova/+spec/xs-ovs">Open
vSwitch</a>, and pause
and suspend capabilities have been <a href="https://blueprints.launchpad.net/nova/+spec/kvm-pause-suspend">added to the KVM
hypervisor</a>.</p>
<p>Now keep your seatbelt fastened, because diablo-2 is set to release on
June 30th.</p>
<h2>OpenStack Nova: Main themes for Diablo (2011-05-20)</h2>
<p>A few weeks after the OpenStack Design Summit in Santa Clara, we are
starting to get a better picture of <a href="https://blueprints.launchpad.net/nova/diablo">what should
be</a> in the next version of
OpenStack Nova, codenamed <em>Diablo</em>, <a href="http://wiki.openstack.org/DiabloReleaseSchedule">scheduled for
release</a> on September
22.</p>
<p>One big priority of this release is to <a href="https://blueprints.launchpad.net/nova/+spec/separate-code-for-services">separate
code</a>
for the network and volume services, and <a href="https://blueprints.launchpad.net/nova/+spec/network-refactoring">refactor nova-network
code</a>
to add a <a href="https://blueprints.launchpad.net/nova/+spec/implement-network-api">clear internal
API</a>.
This will make it possible to plug in separate
<a href="https://blueprints.launchpad.net/nova/+spec/integrate-network-services">network</a>
and
<a href="https://blueprints.launchpad.net/nova/+spec/integrate-block-storage">volume</a>
service providers, and pave the way for integration with future
OpenStack projects like
<a href="https://launchpad.net/netstack">Quantum/Melange/Donabe</a> and
<a href="https://launchpad.net/lunr">LunR</a>. In preparation for this, we'll push
changes to <a href="https://blueprints.launchpad.net/nova/+spec/no-db-messaging">rely more on the
queue</a> (and
less on the database) to pass information between components. In the
same area, we need some more changes to <a href="https://blueprints.launchpad.net/nova/+spec/nova-multi-nic">support multiple
NICs</a> and
should also provide a client OpenStack API for directly <a href="https://blueprints.launchpad.net/nova/+spec/implement-volume-api">interacting
with
volumes</a>.</p>
<p>A second theme of the Diablo release is the new <a href="https://blueprints.launchpad.net/nova/+spec/distributed-scheduler">distributed
scheduler</a>,
which should be able to schedule across zones while taking capabilities
into account. This will need changes in the way we <a href="https://blueprints.launchpad.net/nova/+spec/nova-instance-referencing">reference
instances</a>,
as well as some changes for <a href="https://blueprints.launchpad.net/nova/+spec/ec2-id-compatibilty">EC2 API
compatibility</a>.</p>
<p>On the API side, we should <a href="https://blueprints.launchpad.net/nova/+spec/openstack-compute-api-11-finalization">finalize OpenStack API
1.1</a>
support, including work on <a href="https://blueprints.launchpad.net/nova/+spec/openstack-api-floating-ips">floating
IPs</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/shared-ip-groups">shared IP
groups</a>.
For administrators, <a href="https://blueprints.launchpad.net/nova/+spec/instance-migration">instance
migration</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/admin-account-actions">account administration
actions</a>
should be added. We'll also ensure good AWS API
<a href="https://blueprints.launchpad.net/nova/+spec/reasonable-aws-compatibility">compatibility</a>
and
<a href="https://blueprints.launchpad.net/nova/+spec/aws-api-validation">validation</a>.</p>
<p>Support for
<a href="https://blueprints.launchpad.net/nova/+spec/snapshot-volume">snapshotting</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/clone-volume">cloning</a> and
<a href="https://blueprints.launchpad.net/nova/+spec/boot-from-volume">booting</a>
from volumes should land early in this cycle, as well as
<a href="https://blueprints.launchpad.net/nova/+spec/configuration-drive">new</a>
<a href="https://blueprints.launchpad.net/nova/+spec/instance-transport">ways</a>
of communicating configuration data between host and guest. We also need
to
<a href="https://blueprints.launchpad.net/nova/+spec/integrate-nova-authn">integrate</a>
<a href="https://blueprints.launchpad.net/nova/+spec/finalize-nova-auth">and finalize</a>
AuthN/AuthZ with the new common Keystone authentication system. Lots of
other features are planned (and others might be added before the end),
you can check out the <a href="https://blueprints.launchpad.net/nova/diablo">blueprints
plan</a> for more detail.</p>
<p>Last but not least, on the QA side, we should have <a href="https://blueprints.launchpad.net/nova/+spec/testing-jenkins-integration">continuous automated
testing</a>
across a range of <a href="https://blueprints.launchpad.net/nova/+spec/reference-architectures">reference
architectures</a>
and increase our <a href="https://blueprints.launchpad.net/nova/+spec/diablo-testing">unittest and smoketest
coverage</a>
among
<a href="https://blueprints.launchpad.net/nova/+spec/engineer-in-quality">other</a>
<a href="https://blueprints.launchpad.net/nova/+spec/libvirt-refactoring">efforts</a>
<a href="https://blueprints.launchpad.net/nova/+spec/nova-api-serialization">to</a>
build-in quality.</p>
<p>The first milestone for this cycle,
<a href="https://launchpad.net/nova/+milestone/diablo-1">diablo-1</a>, should be
released on June 2nd.</p>
<h2>OpenStack @ Ubuntu Developer Summit (2011-05-18)</h2>
<p>Last week I attended the <a href="http://uds.ubuntu.com/">Ubuntu Developer Summit for
Oneiric</a> in Budapest. This was the first time I
attended UDS as an upstream representative rather than as a Canonical
employee. I very much enjoyed it: not being a track lead or a busy
technical lead actually gives you desirable flexibility in your agenda
:)</p>
<p>First of all, a quick comment on the big announcement of the week, which
the Twittersphere is not done retweeting yet: "Ubuntu switching from
Eucalyptus to OpenStack". I think it would be more accurate to say that
Ubuntu chose to use OpenStack as its <em>default</em> cloud stack for future
versions. Comparing Eucalyptus and OpenStack is like comparing apples to
apple trees: OpenStack provides several cloud infrastructure pieces (of
which only OpenStack Compute -Nova- covers the same space as
Eucalyptus). I suspect the wide scope of the project played a role in
OpenStack being selected as the default stack for the future. Eucalyptus
and OpenStack Nova should both be present as deployment options from
11.10 on.</p>
<p>On the UDS format itself, I'd say that the "one blueprint = one hour"
format does not scale that well. The number of hours in the week is
fixed, so when the project grows you end up having too many sessions
going on at the same time. Lots of blueprints do not require one hour of
discussion, but rather a quick presentation of the plan, feedback from
interested parties and Q&A. That's what we do for our own Design
Summits, but I'll admit it makes scheduling a bit more complex. On the
bright side, having the floor plan inside our UDS badges was a really good
idea, especially with confusing room names :)</p>
<p>The Launchpad and bzr guys were very present during the week, attentive
and reactive to the wishes of upstream projects. They have great
improvements and features coming up, including <a href="http://blog.launchpad.net/bug-tracking/beta-squadron-engage-better-bug-subscriptions">finer-grained
bugmail</a>
and dramatic <a href="http://bazaarvcs.wordpress.com/2011/05/17/faster-large-tree-handling/">speed
improvements</a>
in bzr handling of large repos.</p>
<p>Last week also saw the rise of creampiesourcing: motivating groups of
developers with bets ("if the number of critical bugs for Launchpad goes
to 0 by June 27, I'll <a href="http://blog.launchpad.net/general/a-cream-pie-in-the-face">take a
creampie</a> in
the face"). Seems to work better than karma points.</p>
<p>Finally, Rackspace Hosting was co-sponsoring the "meet and greet" event
on the Monday night, promoting OpenStack. I think offering cool
T-shirts, like we did at the previous UDS in Orlando, was more effective
in spreading the word and making the project "visible" over time: in
Budapest you could see a lot of people wearing the OpenStack T-shirts we
offered back then!</p>
<h2>Diablo Design Summit (2011-04-21)</h2>
<p>Just a few days from now, lots of OpenStack community members from
around the world will gather in (hopefully sunny) Santa Clara for the
OpenStack Conference and Design Summit. As I explained
<a href="http://www.openstack.org/blog/2011/04/what-to-expect-from-the-conference-and-design-summit/">here</a>,
those are two co-hosted events, and here are a few details on the
Design Summit part.</p>
<p>The Design summit starts on Wednesday at 9am and ends on Friday at 5pm.
Apart from the 25-minute opening plenary, we have three types of
sessions: design discussions, unconference sessions and lightning talks.</p>
<p><em>Design sessions</em> are the meat of the Design Summit. Those 25-minute or
55-minute sessions were selected from proposals coming from developers
working on a feature for future releases of OpenStack. They are
organized in 8 different tracks:</p>
<ul>
<li>OpenStack infrastructure: discussions affecting all the projects</li>
<li>Nova APIs: discussions around the OpenStack and EC2 API in Nova</li>
<li>Nova volumes: discussions around volumes / block storage in Nova</li>
<li>Nova networking: discussions around networking in Nova and the
proposed Network as a Service project</li>
<li>Nova core: other discussions on Nova</li>
<li>Glance: discussions on the Glance Image service project</li>
<li>Swift: discussions on the Swift Object Storage project</li>
<li>Other projects: discussions on OpenStack incubating projects</li>
</ul>
<p>You can find a tentative schedule, organized by day or track, at the
following URL: <a href="http://summit.openstack.org/ods-d/">http://summit.openstack.org/ods-d/</a>. You should expect
it to change by next week though, as we tune it to ensure required
people are present where they are needed. So refresh often !</p>
<p>The second type of sessions is <em>unconference sessions</em>. On the Thursday
and the Friday, we'll have an openly-scheduled room available for all
types of presentations or discussions. We'll have a big whiteboard with
empty 30-minute slots: just mark your name and session title in your
preferred time slot. We expect quite a few educational presentations, as
well as discussions around peripheral projects to happen there. So watch
that space !</p>
<p>Finally, from Wednesday to Friday, after lunch and before the design
sessions restart, we'll have 25min of <em>lightning talks</em>. These will also
be openly-scheduled, but in 5-minute increments. Anything
loosely-connected to OpenStack is relevant, so step up and use your 5
minutes of glory :)</p>
<p>All the project technical leads and myself hope that this mix of
sessions will allow us to make the most of those three days together.
See you there!</p>
<h2>Cactus is done, now Diablo (2011-04-18)</h2>
<p>Our 2011.2 release, codenamed "Cactus", was finally released early on
April 15. The Diablo merge window opened a few hours later, and in 8
days the developer community will gather in Santa Clara to discuss what
will be in it.</p>
<p>If you want to develop a feature for Diablo and get your design reviewed
or discussed at the summit, remember that you have until the <strong>end of
Tuesday, April 19</strong> to submit a design summit session. The procedure to
follow is simple, please see <a href="http://wiki.openstack.org/Summit">http://wiki.openstack.org/Summit</a> for more
details.</p>
<p>Your session blueprint will be reviewed as soon as possible by the track
leads. Given the very large number of sessions submitted, we might try
to group together related blueprints for a common discussion, or have to
refuse a few sessions. April 19 is a soft deadline: you can still
submit sessions after that date, but there is no guarantee we will
review them, and they will be given lower priority than the ones
submitted early.</p>
<p>See you all there!</p>OpenStack Cactus BMPFreeze report2011-03-18T10:06:00+01:002011-03-18T10:06:00+01:00Thierry Carreztag:ttx.re,2011-03-18:/openstack-cactus-bmpfreeze-report.html<p>Our <a href="http://wiki.openstack.org/ReleaseCycle">time-based release cycles</a>
are paced by a number of freezes and milestones, and we just
passed <a href="http://wiki.openstack.org/BranchMergeProposalFreeze">BranchMergeProposalFreeze</a>
for the <a href="http://wiki.openstack.org/CactusReleaseSchedule">Cactus
cycle</a>. Feature
branches should have been proposed by now, so let's see how well we did
compared to the <a href="http://wiki.openstack.org/releasestatus/">original plan</a>
we had for this release:</p>
<ul>
<li>Essential specs: 3 …</li></ul><p>Our <a href="http://wiki.openstack.org/ReleaseCycle">time-based release cycles</a>
are paced by a number of freezes and milestones, and we just
passed <a href="http://wiki.openstack.org/BranchMergeProposalFreeze">BranchMergeProposalFreeze</a>
for the <a href="http://wiki.openstack.org/CactusReleaseSchedule">Cactus
cycle</a>. Feature
branches should have been proposed by now, so let's see how well we did
compared to the <a href="http://wiki.openstack.org/releasestatus/">original plan</a>
we had for this release:</p>
<ul>
<li>Essential specs: 3 merged, 2 proposed (100% proposed in time)</li>
<li>High-prio specs: 5 merged, 3 proposed, 3 deferred (73%)</li>
<li>Medium-prio specs: 11 merged, 5 proposed, 4 late, 2 deferred (73%)</li>
<li>Low-prio specs: 7 merged, 1 proposed, 6 late, 2 deferred (50%)</li>
</ul>
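<p>For the curious, here is how I read those percentages: specs that were merged or proposed by the freeze count as "in time", out of all specs targeted at that priority. A quick sanity-check sketch (my own assumption about the formula, not an official script):</p>

```python
# Sanity-check of the "% proposed in time" figures above, assuming
# the rate is (merged + proposed) / total targeted at each priority.
def hit_rate(in_time, total):
    return round(100 * in_time / total)

essential = hit_rate(3 + 2, 3 + 2)          # 5 of 5 in time
high = hit_rate(5 + 3, 5 + 3 + 3)           # 8 of 11 in time
medium = hit_rate(11 + 5, 11 + 5 + 4 + 2)   # 16 of 22 in time
low = hit_rate(7 + 1, 7 + 1 + 6 + 2)        # 8 of 16 in time

# The per-priority totals also add up to the 54 targeted specs.
print(essential, high, medium, low)  # 100 73 73 50
```

Reassuringly, the numbers reproduce the 100%, 73%, 73% and 50% figures in the list, and 5 + 11 + 22 + 16 gives the 54 targeted specs mentioned below.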
<p>Compared to the <a href="https://ttx.re/openstack-bexar-bmpfreeze-report.html">Bexar
cycle</a>,
the results are comparable (with less success on High specs, better
success on Medium specs). With 54 specs targeted (compared to 42 in
Bexar), that's a great achievement, so congratulations to all the
developers!</p>
<p>Now it's time to concentrate on review and get those proposed branches
merged before <a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a>,
which happens at end of day March 24.</p>Coming up in OpenStack Cactus...2011-03-16T16:06:00+01:002011-03-16T16:06:00+01:00Thierry Carreztag:ttx.re,2011-03-16:/coming-up-in-openstack-cactus.html<p>In a bit more than a week, we will hit
<a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a> for the OpenStack
"Cactus" cycle, so we are starting to have a good idea of what new features
will make it. The Cactus cycle focus was on stability, so there are
fewer new features compared to Bexar, but the developers still …</p><p>In a bit more than a week, we will hit
<a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a> for the OpenStack
"Cactus" cycle, so we are starting to have a good idea of what new features
will make it. The Cactus cycle focus was on stability, so there are
fewer new features compared to Bexar, but the developers still achieved
a lot in a couple of months...</p>
<h3>Swift (OpenStack object storage)</h3>
<p>The Swift team really focused on stability and performance improvements
this cycle. I will just single out the refactoring of the proxy to
make <a href="https://blueprints.launchpad.net/swift/+spec/cactus-asynchronous-proxy">backend requests
concurrent</a>,
and improvements on <a href="https://blueprints.launchpad.net/swift/+spec/cactus-improved-sqlite3-indexing">sqlite3
indexing</a>
as good examples of this effort.</p>
<h3>Glance (OpenStack image registry and delivery service)</h3>
<p>Bexar saw the first release of Glance, and in Cactus it was vastly
improved to match the standards we have for the rest of OpenStack:
<a href="https://blueprints.launchpad.net/glance/+spec/logging">logging</a>,
<a href="https://blueprints.launchpad.net/glance/+spec/use-config-parser">configuration</a>
and
<a href="https://blueprints.launchpad.net/glance/+spec/use-optparse">options</a>
parsing, use of
<a href="https://blueprints.launchpad.net/glance/+spec/paste-deploy">paste.deploy</a>
and <a href="https://blueprints.launchpad.net/glance/+spec/non-static-versioning">non-static
versioning</a>,
database
<a href="https://blueprints.launchpad.net/glance/+spec/registry-db-migration">migrations</a>...
New features include a <a href="https://blueprints.launchpad.net/glance/+spec/cli-tool">CLI
tool</a> and a new
method for clients to <a href="https://blueprints.launchpad.net/glance/+spec/image-checksumming">verify
images</a>.
Glance developers might also sneak in an <a href="https://blueprints.launchpad.net/glance/+spec/middleware-authentication">authentication
middleware</a>
and support for <a href="https://blueprints.launchpad.net/glance/+spec/support-ssl">HTTPS
connections</a>!</p>
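<p>The image verification mentioned above boils down to comparing a locally computed digest with the checksum the registry reports, and refusing the image if they disagree. A toy illustration of the principle (not Glance's actual API; the MD5 choice and function names here are assumptions for the example):</p>

```python
import hashlib

def verify_image(data: bytes, expected_digest: str) -> bool:
    """Toy check: do the downloaded image bytes match the digest
    the registry reported? (Illustrative only, not Glance's API.)"""
    return hashlib.md5(data).hexdigest() == expected_digest

# A client downloads the image, then refuses to use it if the
# digests disagree (e.g. a truncated or corrupted transfer).
payload = b"fake image bytes"
digest = hashlib.md5(payload).hexdigest()
print(verify_image(payload, digest))        # True
print(verify_image(payload[:-1], digest))   # False
```

<p>The same pattern works with any digest algorithm, as long as client and server agree on it.</p>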
<h3>Nova (OpenStack compute)</h3>
<p>A lot of the feature work in Nova for Cactus revolved around the
<a href="https://blueprints.launchpad.net/nova/+spec/openstack-api-1-1">OpenStack API
1.1</a> and
exposing features through XenServer
(<a href="https://blueprints.launchpad.net/nova/+spec/xs-migration">migration</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/xs-resize">resize</a>, <a href="https://blueprints.launchpad.net/nova/+spec/xs-rescue">rescue
mode</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/xs-ipv6">IPv6</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/xs-fileinject">file</a> and
<a href="https://blueprints.launchpad.net/nova/+spec/xs-inject-networking">network</a>
injection...). We should also have the long-awaited <a href="https://blueprints.launchpad.net/nova/+spec/cactus-migration-live">live
migration</a>
feature (for KVM), support for <a href="https://blueprints.launchpad.net/nova/+spec/bexar-nova-containers">LXC
containers</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/unified-images">VHD
images</a>,
<a href="https://blueprints.launchpad.net/nova/+spec/multi-nic">multiple</a> <a href="https://blueprints.launchpad.net/nova/+spec/multinic-libvirt">NICs</a>,
dynamically-configured <a href="https://blueprints.launchpad.net/nova/+spec/configure-instance-types-dynamically">instance
flavors</a>
or volume storage on <a href="https://blueprints.launchpad.net/nova/+spec/support-hp-san">HP/Lefthand
SANs</a>.
XenAPI should get support for <a href="https://blueprints.launchpad.net/nova/+spec/xenapi-vlan-network-manager">VLAN network
manager</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/xenapi-basic-network-injection">network
injection</a>.
We hope support for <a href="https://blueprints.launchpad.net/nova/+spec/hypervisor-vmware-vsphere-support">VMware/vSphere
hypervisor</a>
will make it.</p>
<p>The rest of the Nova team concentrated on testing, bugfixing (already
115 bugfixes committed to Cactus!) and producing a coherent release, as
evidenced by the work on adding the missing <a href="https://blueprints.launchpad.net/nova/+spec/cactus-flatmanager-ipv6-support">IPv6 support for
FlatManager</a>
network model. I should also mention that the groundwork for
<a href="https://blueprints.launchpad.net/nova/+spec/multi-tenant-accounting">multi-tenant
accounting</a>
and <a href="https://blueprints.launchpad.net/nova/+spec/multi-cluster-in-a-region">multiple clusters in a
region</a>
also landed in Cactus.</p>
<p>Across the three projects' branches, last month we had <em>more than 2500
commits</em> by more than 75 developers. Not too bad for a project less than
a year old... We'll see the result of this work on Cactus release day,
scheduled for April 14.</p>OpenStack Project Technical Leads (PTLs)2011-03-09T11:34:00+01:002011-03-09T11:34:00+01:00Thierry Carreztag:ttx.re,2011-03-09:/openstack-project-technical-leads-ptls.html<p>I'd like to quickly clarify what will be expected, from a release
management perspective, of the Project Technical Leads (PTLs) some of
us will nominate and elect in the following weeks.</p>
<p>PTLs, like it says on the tin, will have to technically lead each
project. That comes with a few …</p><p>I'd like to quickly clarify what will be expected, from a release
management perspective, of the Project Technical Leads (PTLs) some of
us will nominate and elect in the following weeks.</p>
<p>PTLs, like it says on the tin, will have to technically lead each
project. That comes with a few rights, but also with a lot of duties
that the candidates shouldn't underestimate...</p>
<p>Rights include the ability to decide between conflicting opinions on a
technical debate, or the authority to set the theme for the next
release. Exercising those rights will only be possible if the technical
opinions of the elected lead are widely respected in the project.</p>
<p>Duties of the PTLs, from a release management perspective, mainly
include coming up with a plan for the next release(s). That means
staying on top of what everyone proposes, selecting (and rejecting)
blueprints for a given cycle, setting priorities, approving designs
(potentially with the help of other project drivers), or making sure we
don't duplicate work. The PTLs should also be heavily involved in the
design summit preparation, making sure we have sessions for what we need
to discuss, and encouraging people to submit corresponding blueprints.</p>
<p>As release manager, I can help with the process, but the decisions must
come from the PTLs, who have the legitimacy of being elected. During the
cycle, I will then help in making sure the defined plan is on track.</p>
<p>With a well-established project like Swift or a relatively-small project
like Glance, the PTL work can certainly be done at the same time as
regular development. For Nova however, the PTL should expect project
coordination work to take a large part of his time, so he could find
himself not being able to write as much code as he would want. That
should be kept in mind before you accept nominations :)</p>
<p>Hoping this helps in clarifying expectations... Happy nominating and
voting!</p>Upstream projects vs. Distributions2011-02-28T11:08:00+01:002011-02-28T11:08:00+01:00Thierry Carreztag:ttx.re,2011-02-28:/upstream-projects-vs-distributions.html<p>You can broadly split open source projects into two categories.
<em>Upstream</em> projects develop and publish source code for various
applications and features. <em>Downstream</em> projects are consumers of this
source code. The most common type of downstream projects are
<em>distributions</em>, which release ready-to-use binary packages of these
upstream applications, make …</p><p>You can broadly split open source projects into two categories.
<em>Upstream</em> projects develop and publish source code for various
applications and features. <em>Downstream</em> projects are consumers of this
source code. The most common type of downstream projects are
<em>distributions</em>, which release ready-to-use binary packages of these
upstream applications, make sure they integrate well with the rest of
the system, and release security and bugfix updates according to their
maintenance policies.</p>
<p>The relationship between upstream projects and distributions is always a
bit difficult, because their roles overlap. Since I'm sitting on
both sides of the fence, let's try to find common ground.</p>
<h3>Overlapping roles</h3>
<p>In an ideal world, everyone would install software through distribution
packages, and the roles wouldn't overlap. In the real world though,
upstream projects need to deal with distributions that don't provide
packages for their software, or provide old, buggy versions with no
mechanism for getting fresh ones. That's why they need to care about
manual installation or update mechanisms. On the other hand, in their
rush to release fixes, distributions sometimes carry patches without
sending them upstream immediately. Both want to provide bugfix updates
to stable versions. In all cases the overlapping roles end up
duplicating work and creating unnecessary friction.</p>
<h3>Splitting the roles</h3>
<p>In my (humble) opinion, upstream projects should encourage the use of
packaged software wherever possible, rather than resisting it. They
should concentrate on their core competency: working on producing new
releases of their code. Dealing with distribution issues, environment
specificities or maintaining stable branches is a different type of
work, and one that distributions excel in. So the key seems to be in
splitting the roles more cleanly.</p>
<p>Upstream projects should release code, together with good documentation
on how to manually deploy it: dependencies, startup and upgrade
mechanisms, open bug trackers with links to patches... This
documentation can be reused by manual deployers and distribution
packagers alike. They should stop short of providing installers,
auto-updaters, dependency bundles, etc. They should limit point release
updates to critical issues (data loss, security...).</p>
<p>Distributions should be responsible for proper packaging (easy way to
install the software and its dependencies, together with startup scripts
and other system integration), and would be responsible for more general
bugfix updates that match their maintenance policy.</p>
<p>With such a split, you obviously <em>will</em> end up with a subpar user
experience if you try to manually install the software from the released
code. But you facilitate packaging, so you should end up being packaged
in more distributions. I think time is better spent contacting
distributions to get packaged rather than trying to improve the manual
installation to the point where it is actually usable.</p>
<h3>Freshness</h3>
<p>One case where you end up doing manual installations (even on supported
distributions) is to get the latest released code running on
already-released distributions. Due to stable release policies in
distributions, they will release bugfix updates for the version that was
available when they released, but usually won't provide a whole new
version of a package.</p>
<p>The solution is in specific distribution archives that track the latest
upstream releases (like PPAs in Ubuntu) and make them available for
users of already-released distributions. Those are usually
co-maintained between distributions and upstream projects.</p>
<h3>Reference distributions</h3>
<p>At this point, it is worth taking collaboration one step further, and
having developers who are involved in both projects! Those can make sure
the distribution includes the packages and patches you need for your
software to run properly. Those can make sure the distribution is one on
which your software is up-to-date, runs properly and gets appropriate
bugfix updates. Those can maintain the specific distribution archives
for the latest upstream releases.</p>
<p>That distribution can then become a <em>reference distribution</em> for the
upstream project, one that is tightly integrated with the upstream
project and lives in harmony.</p>
<p>Two closing remarks:</p>
<ul>
<li>You can have multiple reference distributions. That said, one way to
limit friction and increase freshness is to have
somewhat-synchronized release cycles, which may not scale very well.</li>
<li>I realize the proposed role split and reference distro scheme might
not be generally applicable to all open source upstream projects. In
my experience it worked well with server software.</li>
</ul>
<p>In OpenStack, having a few Ubuntu core developers in the project (and
the Ubuntu server team supporting us) allows us to use Ubuntu as a
reference distribution. We have packages up for other distributions, but
those are not (yet) official distribution packages. Any other distro
developers interested in joining?</p>Agile vs. Open2011-01-21T10:53:00+01:002011-01-21T10:53:00+01:00Thierry Carreztag:ttx.re,2011-01-21:/agile-vs-open.html<p>I've been asked multiple times why open source project management does
not fully adopt agile methodologies, which are so great. Or what the
main differences between the two are.</p>
<h2>Agile is good for you</h2>
<p>So first of all, I'd like to say that I think Agile methodologies are
great. Their …</p><p>I've been asked multiple times why open source project management does
not fully adopt agile methodologies, which are so great. Or what the
main differences between the two are.</p>
<h2>Agile is good for you</h2>
<p>So first of all, I'd like to say that I think Agile methodologies are
great. Their primary value to me is to allow software development groups
to handle their stakeholders' requirements in a sane way. By placing
developers closer to the center of the game, they leverage <em>Autonomy</em>
(one of the three main intrinsic motivators that Dan Pink mentions in
his book <em>Drive</em>) as a way to maximize a development team's productivity.</p>
<h2>Agile vs. Open</h2>
<p>That said, applying pure Agile methods doesn't really work well for open
source project management. Some great concepts can be reused, like
frequent time-based releases, peer review, or test-driven development.
But most of the group tools assume a local, relatively small team. Doing
a morning stand-up meeting with a team of 60 in widely different
timezones is a bit difficult. They also assume that project management
has some direct control over the developers (they can pick from the
backlog, but not outside it), while there is no such thing in an open
development project.</p>
<p>The goals are also different. The main goal of Agile in my opinion is to
maximize a development team's productivity: optimizing the team's velocity
so that the most can be achieved by a team of a given size. The main goal of
Open source project management is not to maximize productivity. It's to
<strong>maximize contributions</strong>. Produce the most, with the largest group of
people possible.</p>
<p>That's why open source project management is all about setting the
framework and the rules of play (what can get in trunk and how), and
about trying to keep track of what is being done (to minimize confusion
and friction between groups of developers). That's why our release
cycles are slightly longer than Agile sprints, to have a cadence that is
more inclusive of development styles, and to enforce time to focus, as a
group, on QA before a release.</p>
<h2>Agile devs in Open source</h2>
<p>It's difficult for Agile developers to abandon their nice tools and
adopt seemingly-more-confusing open source bazaar ways. But in the end,
I think open source is more empowering, by addressing the two other Dan
Pink types of intrinsic motivators, <em>Purpose</em> and <em>Mastery</em>. Working on
an open source project and contributing to the world's amount of public
knowledge obviously gives an individual a sense of purpose in his work,
but even more important is mastery.</p>
<p>Each developer in an open source project actually represents himself.
With all proceedings and production being public, in the end his
personal name is attached to it. He builds mastery and influence over
the project by his own actions, not by the name of the company that pays
his bills. Of course his employer has requirements and usually pays him
to work on something specific, but the developer acts as the gateway to
get his employer's requirements into the open source project. That way
of handling stakeholders' requirements places individual developers at
the very center of the game, even more than Agile does. You end up with
the highest number of highly-motivated individuals, which in turn leads
to lots of stuff getting done.</p>
<h2>Agile subteams</h2>
<p>Finally, nothing prevents an open source project from having Agile
development subgroups contributing to it. These subgroups can have user
stories, planning poker, feature backlogs, pair programming and stand-up
meetings. There are multiple challenges though. Aligning agile sprints
with the open source project's common development schedule is tricky.
The Agile work schedule needs to be adapted to make room for generic
open source project tasks like random code reviews or pre-release QA.
Some other group may end up implementing a feature from your internal
backlog, and communicating the backlog outside the group can be
bothersome and challenging.</p>
<p>I'd like to find ways, though. What do you think? Can Agile and Open
live in harmony? Should they try?</p>OpenStack Bexar FeatureFreeze report2011-01-14T08:44:00+01:002011-01-14T08:44:00+01:00Thierry Carreztag:ttx.re,2011-01-14:/openstack-bexar-featurefreeze-report.html<p>The OpenStack Bexar release has now
passed <a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a>.
Branches containing needed new features or changing the expected
behavior of the software can still be proposed under an exception,
but their benefit will have to outweigh the regression risks they bring
for them to be part of the Bexar release.</p>
<p>Some …</p><p>The OpenStack Bexar release has now
passed <a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a>.
Branches containing needed new features or changing the expected
behavior of the software can still be proposed under an exception,
but their benefit will have to outweigh the regression risks they bring
for them to be part of the Bexar release.</p>
<p>Some insight on how well we managed to hit the objectives of this
deadline (% of specs that were merged in time for the freeze):</p>
<ul>
<li>Essential specs: 3 merged (100%)</li>
<li>High-prio specs: 14 merged, 1 still proposed (93%)</li>
<li>Medium-prio specs: 9 merged, 2 still proposed, 4 deferred (60%)</li>
<li>Low-prio specs: 8 merged, 2 still proposed, 2 deferred (66%)</li>
</ul>
<p>So we did a great job of getting High-prio proposed branches merged.
Thanks to a final push on the freeze day, we also managed to get most of
the Medium and Low priority branches in. A few previously-untargeted
branches made it, like <a href="https://blueprints.launchpad.net/nova/+spec/cow-instances">switching to CoW format by
default</a>,
support for <a href="https://blueprints.launchpad.net/nova/+spec/instance-avail-zones">availability
zones</a>
or <a href="https://blueprints.launchpad.net/nova/+spec/ceph-block-driver">CEPH
volumes</a>,
for a total of <a href="http://wiki.openstack.org/releasestatus/">45 targeted
specs</a>. The overall hit score
is just above 75%, which is amazing for that number of objectives.</p>
<p>Next stop is in 12
days, <a href="http://wiki.openstack.org/GammaFreeze">GammaFreeze</a> (Jan 25).
Until then, we need to get as many bugfixes in as possible. Now that
(most) feature branches have landed, it's time to put your QA suit on
and test, report and fix all issues you encounter. Let's make Bexar a
great release!</p>OpenStack Bexar BMPFreeze report2011-01-07T14:00:00+01:002011-01-07T14:00:00+01:00Thierry Carreztag:ttx.re,2011-01-07:/openstack-bexar-bmpfreeze-report.html<p>The OpenStack Bexar release has now passed
<a href="http://wiki.openstack.org/BranchMergeProposalFreeze">BranchMergeProposalFreeze</a>.
Branches containing new features or changing the behavior of the
software can still be proposed under an exception, but there is no
guarantee they will be accepted and be part of the Bexar release.</p>
<p>Some insight on how well we managed to …</p><p>The OpenStack Bexar release has now passed
<a href="http://wiki.openstack.org/BranchMergeProposalFreeze">BranchMergeProposalFreeze</a>.
Branches containing new features or changing the behavior of the
software can still be proposed under an exception, but there is no
guarantee they will be accepted and be part of the Bexar release.</p>
<p>Some insight on how well we managed to hit the objectives of this
deadline (% of specs that were proposed in time for the freeze):</p>
<ul>
<li>Essential specs: 3 merged (100%)</li>
<li>High-prio specs: 8 merged, 5-6 proposed, 2-1 late (87-93%)</li>
<li>Medium-prio specs: 5 merged, 4 proposed, 3 late, 3 deferred (60%)</li>
<li>Low-prio specs: 2 merged, 4 proposed, 3 late (66%)</li>
</ul>
<p>Given the very high number of specs targeted to Bexar, this is quite
good (I was expecting something around 100%, 75%, 50% and 25%). The
overall score is around 75%, which is amazing with <a href="http://wiki.openstack.org/releasestatus/">42 targeted
specs</a> in 3 months. Congrats
to all the developers! We also had a few unexpected specs that made it;
we will retrospectively add them to the picture.</p>
<p>Next stop is next week,
<a href="http://wiki.openstack.org/FeatureFreeze">FeatureFreeze</a> (Jan 13): all
feature branches need to be merged so that we can safely switch to
QA/testing/bugfix gear, 3 weeks away from release. Given how well the
reviews are going, I hope we will be able to sneak a few late branches in and
further improve the scores.</p>What will be in OpenStack Bexar release2010-12-22T08:45:00+01:002010-12-22T08:45:00+01:00Thierry Carreztag:ttx.re,2010-12-22:/what-will-be-in-openstack-bexar-release.html<p>OpenStack is busy with so much development activity it's hard to keep
up. <a href="http://wiki.openstack.org/releasestatus/">42 (!) specs</a> were
targeted for the 3-month long <a href="http://wiki.openstack.org/BexarReleaseSchedule">Bexar development
cycle</a>... and there are
more than 150 active branches. Over the last month alone, we saw 750
commits by 50 different people. Taking a step back, what …</p><p>OpenStack is busy with so much development activity it's hard to keep
up. <a href="http://wiki.openstack.org/releasestatus/">42 (!) specs</a> were
targeted for the 3-month long <a href="http://wiki.openstack.org/BexarReleaseSchedule">Bexar development
cycle</a>... and there are
more than 150 active branches. Over the last month alone, we saw 750
commits by 50 different people. Taking a step back, what new features
should you expect to land on February 3rd, in the Bexar release?</p>
<h3>Swift (OpenStack object storage)</h3>
<p>The big news in Swift is support for unlimited object size, through the
implementation of <a href="https://blueprints.launchpad.net/swift/+spec/bexar-client-side-chunking">client-side
chunking</a>.
The only size limit for your objects is now the space available in your
Swift cluster! You can read more about that exciting feature in <a href="http://programmerthoughts.com/programming/the-story-of-an-openstack-feature/">John
Dickinson's blog
post</a>.
We also hope to ship
<a href="https://blueprints.launchpad.net/swift/+spec/bexar-swauth">Swauth</a>,
DevAuth's highly scalable replacement, directly in the Swift codebase.
Exposure of most of the <a href="https://blueprints.launchpad.net/swift/+spec/bexar-s3api">S3 API in
Swift</a> may or
may not make it.</p>
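<p>The idea behind client-side chunking is simple: the client splits a large object into fixed-size segments and uploads each one separately, so no single request ever hits the server's per-object size limit; a manifest then lists the segments in order. A stripped-down sketch of the splitting step (illustrative only, not Swift's actual API; the names and segment size are made up for the example):</p>

```python
CHUNK_SIZE = 5 * 1024 * 1024  # arbitrary segment size for this example

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (index, segment) pairs covering all of `data`."""
    for offset in range(0, len(data), chunk_size):
        yield offset // chunk_size, data[offset:offset + chunk_size]

def make_manifest(obj_name: str, n_segments: int):
    """List the segment names in order; a reader fetches the
    manifest, then streams the segments back-to-back."""
    return ["%s/%06d" % (obj_name, i) for i in range(n_segments)]

# Splitting 11 bytes into 4-byte segments gives three segments,
# the last one short.
chunks = list(split_into_chunks(b"x" * 11, chunk_size=4))
print(chunks)  # [(0, b'xxxx'), (1, b'xxxx'), (2, b'xxx')]
print(make_manifest("video.mkv", len(chunks)))
```

<p>Because the segments are independent objects, only the client-side logic changes: the server never needs to handle anything larger than one segment.</p>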
<h3>Glance (OpenStack image registry and delivery service)</h3>
<p>The Glance image service will expose a <a href="https://blueprints.launchpad.net/glance/+spec/unified-api">unified REST
API</a> (no more
distinction between the image registry and the image delivery services).
We will also have the possibility to upload image data and metadata in
<a href="https://blueprints.launchpad.net/glance/+spec/api-add-image">one single
call</a>.
Unified <a href="https://blueprints.launchpad.net/glance/+spec/clients">client
classes</a> will be
shipped directly in Glance. We also hope to have an <a href="https://blueprints.launchpad.net/glance/+spec/teller-s3-backend">S3
backend</a>...</p>
<h3>Nova (OpenStack compute)</h3>
<p>There is so much coming up in Nova it's hard to summarize. Nova will
<a href="https://blueprints.launchpad.net/nova/+spec/image-service-use-glance-clients">make use of those new Glance client
classes</a>,
obviously. We will support booting VMs from <a href="https://blueprints.launchpad.net/nova/+spec/raw-disk-images">raw disk
images</a>
(rather than a kernel/ramdisk/image combination) and have a <a href="https://blueprints.launchpad.net/nova/+spec/rescue-mode">rescue
mode</a> to mount
your faulty disks under a sane environment. We plan to have
<a href="https://blueprints.launchpad.net/nova/+spec/xs-snapshots">instance</a>
<a href="https://blueprints.launchpad.net/nova/+spec/snapshot-instance">snapshots</a>
ready. API servers can now
<a href="https://blueprints.launchpad.net/nova/+spec/admin-only-api">expose</a>
optional admin features (through the --allow_admin_api flag), like a
specific XenServer instance
<a href="https://blueprints.launchpad.net/nova/+spec/xs-pause">pause</a> or
<a href="https://blueprints.launchpad.net/nova/+spec/xs-suspend">suspend</a>
feature.</p>
<p>Lots of improvements might go unnoticed, like the
<a href="https://blueprints.launchpad.net/nova/+spec/i18n-support">internationalization</a>
of messages, the standardization on services using eventlet, more robust
<a href="https://blueprints.launchpad.net/nova/+spec/audit-logging">logging</a>,
or the move of the IP allocation <a href="https://blueprints.launchpad.net/nova/+spec/move-ip-allocation">down the
stack</a>.
We'll also finalize some incomplete features, like access to your
<a href="https://blueprints.launchpad.net/nova/+spec/project-vpn">project VLAN through a
VPN</a>, <a href="https://blueprints.launchpad.net/nova/+spec/bexar-iptables-security-groups">security
groups</a>
that work in all network modes, and
<a href="https://blueprints.launchpad.net/nova/+spec/austin-microsoft-hyper-v-support">Hyper-V</a>
support.</p>
<p>We hope to have much more: a <a href="https://blueprints.launchpad.net/nova/+spec/web-based-serial-console">web-based serial
console</a>
to access your VMs,
<a href="https://blueprints.launchpad.net/nova/+spec/ipv6-support">IPv6</a>
support, the possibility to deploy hardware in a <a href="https://blueprints.launchpad.net/nova/+spec/hardware-staging">staging
area</a> of
your cloud, support for highly available block volumes through
<a href="https://blueprints.launchpad.net/nova/+spec/sheepdog-support">Sheepdog</a>,
instance
<a href="https://blueprints.launchpad.net/nova/+spec/diagnostics-per-instance">diagnostics</a>
allowing you to retrieve a history of actions on instances, the possibility
to do <a href="https://blueprints.launchpad.net/nova/+spec/bexar-migration-live">live
migration</a>
in nova-manage, <a href="https://blueprints.launchpad.net/nova/+spec/bexar-iscsi-support-xenapi">iSCSI
support</a>
for XenAPI... But let's be realistic, not everything will land in time.
What doesn't make it will certainly be in the next release, Cactus,
which will be released in April!</p>
<p>Congrats to our awesome development team for making all this possible.
Those last two months have been a very fun ride for me :)</p>The importance of shared understandings2010-12-06T08:34:00+01:002010-12-06T08:34:00+01:00Thierry Carreztag:ttx.re,2010-12-06:/the-importance-of-shared-understandings.html<p>In his book <em>Where in the world is my team</em>, Terence Brake outlines the
three challenges that global and virtual teams face. There is
<em>Isolation</em> (reduced contacts and difficulty of trust building can
easily make you feel alone and lose motivation), <em>Fragmentation</em>
(unclear purpose and fuzzy responsibilities fragment your effort …</p><p>In his book <em>Where in the world is my team</em>, Terence Brake outlines the
three challenges that global and virtual teams face. There is
<em>Isolation</em> (reduced contacts and difficulty of trust building can
easily make you feel alone and lose motivation), <em>Fragmentation</em>
(unclear purpose and fuzzy responsibilities fragment your effort and
make you inefficient) and <em>Confusion</em> (too much or too little
information makes you take the wrong path, and hidden activities prevent
anyone from noticing).</p>
<p>As virtual and global aggregations of common interests, community-driven
open source projects are specifically vulnerable to those challenges.
Fortunately, Brake also explains how to fight these. One of the areas he
identified is convergence, through shared understandings, as a way to
generate clarity and fight confusion (and, to a lesser extent,
fragmentation).</p>
<p>Without shared understandings, the lack of shared context, the
conflicting assumptions, and the distance between team members can
generate a lot of confusion. This confusion wastes everyone's time in the
best case, and causes team implosion in the worst. With
shared understandings, you get natural convergence and you reduce the
need for interrupt-driven communication.</p>
<p>That's why we, the OpenStack team, need documentation. Not only
user-oriented and developer-oriented documentation, but also project
documentation. The team already has a clear
<a href="http://wiki.openstack.org/StartingPage">mission</a>. Meeting face-to-face
during our design summits allows us to get to know each other, which is
critical to bootstrap common context. We need to ensure we have team
principles (we have <a href="http://wiki.openstack.org/BasicDesignTenets">design
tenets</a> and <a href="http://wiki.openstack.org/CodingStandards">coding
standards</a>, we need a code of
conduct), clear priorities, open implementation plans, shared
performance indicators... This won't happen in a day.</p>
<p>Since I joined the project, I tried to produce a basic set of reference
documents which should help generate clarity. So far I have concentrated on
our release cycle and our Launchpad tools:</p>
<ul>
<li><a href="http://wiki.openstack.org/BexarReleaseSchedule">Release Schedule</a></li>
<li><a href="http://wiki.openstack.org/ReleaseCycle">Release Cycle</a></li>
<li><a href="http://wiki.openstack.org/BugsLifecycle">Lifecycle of Bugs</a></li>
<li><a href="http://wiki.openstack.org/BlueprintsLifecycle">Lifecycle of
Blueprints</a></li>
</ul>
<p>Sometimes it looks like those project documentation pages are spelling
out the obvious, but that's the price to pay to make sure everyone starts
from the same assumptions. Those wiki pages are very much open for
discussion and evolution: the goal is not to force anyone into new
workflows or extra bureaucracy. The goal is to have a clear reference
point when you need more clarity. I hope it will help avoid confusion
by establishing shared understandings.</p>My desktop backup solution2010-11-29T17:24:00+01:00Thierry Carreztag:ttx.re,2010-11-29:/my-desktop-backup-solution.html<p>I was inspired by a good <a href="http://www.piware.de/2009/11/my-desktop-backup-solution/">blogpost by Martin
Pitt</a> to set up
my own desktop backup solution. I liked the idea of not requiring the
computer to be on all the time, and having the backup pushed from the
client rather than pulled from the server. However, my needs were
slightly different from his, so I adapted it.</p>
<p>His solution uses rsnapshot locally, then pushes the resulting
directories to a remote server. I didn't want to use local disk space
(SSD ain't cheap), but I had a local server with 2 TB available. So in my
solution, the client rsyncs to the server, then the server triggers
rsnapshot locally if the rsync was successful. This is done over SSH and
the server has no rights whatsoever on the client.</p>
<h3>Prerequisites</h3>
<p>In the examples the client to back up will be called <em>mycli</em> and the
server on which the backup will live is named <em>mysrv</em>. As a
prerequisite, mycli will need rsync and openssh-client installed. mysrv
will need rsnapshot and openssh-server installed. OpenSSH needs to have
public-key authentication enabled.</p>
<h3>SSH setup</h3>
<p><span style="text-decoration:underline;">On the client side</span>,
generate a specific passwordless SSH key for the backup connection:</p>
<div class="highlight"><pre><span></span><code>mkdir ~/.backup
ssh-keygen -f ~/.backup/id_backup
</code></pre></div>
<p><span style="text-decoration:underline;">On the server side</span>,
we'll assume you want to put backups into /srv/backup. First of all,
create an rbackup user that will be used to run the backup server-side:</p>
<div class="highlight"><pre><span></span><code>sudo mkdir /srv/backup
sudo adduser --home /srv/backup --no-create-home --disabled-password rbackup
</code></pre></div>
<p>Next, add your backup public key (the contents of mycli:~/.backup/id_backup.pub) to mysrv:/srv/backup/.ssh/authorized_keys. The
trick is to prefix it (same line, one space separator) with the only
command you want the rbackup user to perform via that SSH connection:</p>
<div class="highlight"><pre><span></span><code><span class="n">command</span><span class="o">=</span><span class="ss">"rsync --config /srv/backup/rsyncd-mycli.conf --server</span>
<span class="ss">--daemon ."</span><span class="w"> </span><span class="n">ssh</span><span class="o">-</span><span class="n">rsa</span><span class="w"> </span><span class="n">AAAAB3NzaLwm0ckRdzotb3</span><span class="p">..</span><span class="mf">.5</span><span class="n">Mbiw</span><span class="o">==</span><span class="w"> </span><span class="n">ttx</span><span class="nv">@mycli</span>
</code></pre></div>
<p>Finally, you need to let rbackup read those .ssh files:</p>
<div class="highlight"><pre><span></span><code>sudo chgrp -R rbackup /srv/backup/.ssh
sudo chmod -R g+r /srv/backup/.ssh
</code></pre></div>
<h3>rsync setup (server-side)</h3>
<p>Now we need to set up the rsync configuration that will be used on those
connections:</p>
<div class="highlight"><pre><span></span><code><span class="err">#</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">rsyncd</span><span class="o">-</span><span class="n">mycli</span><span class="p">.</span><span class="n">conf</span>
<span class="nf">max</span><span class="w"> </span><span class="n">connections</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">1</span>
<span class="n">lock</span><span class="w"> </span><span class="k">file</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">mycli</span><span class="o">/</span><span class="n">rsync</span><span class="p">.</span><span class="n">lock</span>
<span class="nf">log</span><span class="w"> </span><span class="k">file</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">mycli</span><span class="o">/</span><span class="n">rsync</span><span class="p">.</span><span class="nf">log</span>
<span class="k">use</span><span class="w"> </span><span class="n">chroot</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">false</span>
<span class="nf">max</span><span class="w"> </span><span class="n">verbosity</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">3</span>
<span class="k">read</span><span class="w"> </span><span class="k">only</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">false</span>
<span class="k">write</span><span class="w"> </span><span class="k">only</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">true</span>
<span class="o">[</span><span class="n">mycli</span><span class="o">]</span>
<span class="w"> </span><span class="k">path</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">mycli</span><span class="o">/</span><span class="n">incoming</span>
<span class="w"> </span><span class="n">post</span><span class="o">-</span><span class="n">xfer</span><span class="w"> </span><span class="k">exec</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">kick</span><span class="o">-</span><span class="n">rsnapshot</span><span class="w"> </span><span class="o">/</span><span class="n">srv</span><span class="o">/</span><span class="k">backup</span><span class="o">/</span><span class="n">mycli</span><span class="o">/</span><span class="n">rsnapshot</span><span class="p">.</span><span class="n">conf</span>
</code></pre></div>
<p>The <em>post-xfer</em> exec command is executed on successful transfers to
/srv/backup/mycli/incoming. In our case, we want rsync to trigger the
/srv/backup/kick-rsnapshot script:</p>
<div class="highlight"><table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre><span class="normal">1</span>
<span class="normal">2</span>
<span class="normal">3</span>
<span class="normal">4</span></pre></div></td><td class="code"><div><pre><span></span><code><span class="ch">#!/bin/bash</span>
<span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="s2">"</span><span class="nv">$RSYNC_EXIT_STATUS</span><span class="s2">"</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="s2">"0"</span><span class="w"> </span><span class="o">]</span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span>rsnapshot<span class="w"> </span>-c<span class="w"> </span><span class="nv">$1</span><span class="w"> </span>daily
<span class="k">fi</span>
</code></pre></div></td></tr></table></div>
<p>Don't forget to make that one executable :)</p>
<h3>rsnapshot setup (server-side)</h3>
<p>rsnapshot itself is configured in the /srv/backup/mycli/rsnapshot.conf
file (note that rsnapshot requires tab characters, not spaces, between
configuration fields). This is where you specify how many daily and
pseudo-weekly copies you want to keep (read the rsnapshot documentation to
understand the <em>interval</em> concept):</p>
<div class="highlight"><pre><span></span><code><span class="gh">#</span> /srv/backup/mycli/rsnapshot.conf
config_version 1.2
snapshot_root /srv/backup/mycli
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_logger /usr/bin/logger
interval daily 6
interval weekly 6
verbose 2
loglevel 3
lockfile /srv/backup/mycli/rsnapshot.pid
rsync_long_args --delete --numeric-ids --delete-excluded
link_dest 1
backup /srv/backup/mycli/incoming/ ./
</code></pre></div>
<p>Now you just have to create the backup directory hierarchy with
appropriate permissions:</p>
<div class="highlight"><pre><span></span><code>mkdir -p /srv/backup/mycli/incoming
chown -R rbackup:rbackup /srv/backup/mycli
</code></pre></div>
<h3>The backup (client-side)</h3>
<p>The client will rsync periodically to the server, using the following
script:</p>
<div class="highlight"><table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre><span class="normal"> 1</span>
<span class="normal"> 2</span>
<span class="normal"> 3</span>
<span class="normal"> 4</span>
<span class="normal"> 5</span>
<span class="normal"> 6</span>
<span class="normal"> 7</span>
<span class="normal"> 8</span>
<span class="normal"> 9</span>
<span class="normal">10</span>
<span class="normal">11</span>
<span class="normal">12</span>
<span class="normal">13</span>
<span class="normal">14</span>
<span class="normal">15</span>
<span class="normal">16</span></pre></div></td><td class="code"><div><pre><span></span><code><span class="ch">#!/bin/bash</span>
<span class="nb">set</span><span class="w"> </span>-e
<span class="nv">TOUCHFILE</span><span class="o">=</span><span class="nv">$HOME</span>/.backup/last_backup
<span class="c1"># Check if last backup was more than a day before</span>
<span class="nv">now</span><span class="o">=</span><span class="sb">`</span>date<span class="w"> </span>+%s<span class="sb">`</span>
<span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span>-e<span class="w"> </span><span class="nv">$TOUCHFILE</span><span class="w"> </span><span class="o">]</span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nv">age</span><span class="o">=</span><span class="k">$((</span><span class="nv">$now</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="sb">`</span>stat<span class="w"> </span>-c<span class="w"> </span>%Y<span class="w"> </span><span class="nv">$TOUCHFILE</span><span class="sb">`</span><span class="k">))</span>
<span class="k">else</span>
<span class="w"> </span><span class="nb">unset</span><span class="w"> </span>age
<span class="k">fi</span>
<span class="o">[</span><span class="w"> </span>-n<span class="w"> </span><span class="s2">"</span><span class="nv">$age</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="nv">$age</span><span class="w"> </span>-lt<span class="w"> </span><span class="m">86300</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="nb">exit</span><span class="w"> </span><span class="m">0</span>
nice<span class="w"> </span>-n<span class="w"> </span><span class="m">10</span><span class="w"> </span>rsync<span class="w"> </span>-e<span class="w"> </span><span class="s2">"ssh -i </span><span class="nv">$HOME</span><span class="s2">/.backup/id_backup"</span><span class="w"> </span>-avzF<span class="w"> </span>\
<span class="w"> </span>--delete<span class="w"> </span>--safe-links<span class="w"> </span><span class="nv">$HOME</span><span class="w"> </span>rbackup@mysrv::mycli
touch<span class="w"> </span><span class="nv">$TOUCHFILE</span>
</code></pre></div></td></tr></table></div>
<p>That script ensures that at most once per day, you will sync to the
server. You can run it (as your user) as often as you'd like (I suggest
hourly via cron). On successful syncs, the server will trigger rsnapshot
to do its magic backup rotation! Using the same model, you can easily
set up multiple directories or multiple clients.</p>
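<p>The once-a-day throttle in that script boils down to a simple mtime comparison. Here is the same arithmetic as a minimal standalone sketch, using a temporary demo file instead of the real ~/.backup/last_backup touchfile:</p>

```shell
# Standalone illustration of the once-per-day check in the backup script.
# A freshly created temporary file stands in for the touchfile.
TOUCHFILE=$(mktemp)
now=$(date +%s)
# Age of the touchfile in seconds (stat -c %Y prints the mtime, GNU coreutils).
age=$((now - $(stat -c %Y "$TOUCHFILE")))
# Sync only if the last backup is older than 86300 seconds.
if [ "$age" -lt 86300 ]; then
    echo "last backup is recent (${age}s old): skip"
else
    echo "last backup is stale: sync"
fi
rm -f "$TOUCHFILE"
```

<p>The 86300-second threshold is slightly under 24 hours, presumably to leave some margin for cron scheduling jitter so an hourly run doesn't keep missing the daily slot by a few seconds.</p>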
<p>Like with Martin's solution, you should set up various <em>.rsync-filter</em>
files to exclude the directories and files you don't want copied to the
backup server.</p>
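<p>For reference, a per-directory <em>.rsync-filter</em> file uses rsync's filter-rule syntax, and is picked up because of the -F option in the client script. These example patterns are purely illustrative, not from Martin's or my actual setup:</p>

```
# ~/.rsync-filter: "- pattern" excludes the matching files or directories
- .cache/
- .thumbnails/
- *.tmp
```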
<p>The drawback of this approach is that the server keeps an extra copy of
your backup (in the incoming directory). But in my case, since the
server has plenty of space, I can afford it. It also does not work when
you are away from your backup server.</p>
<p>I hope you find that setup useful; it has served me well so far.</p>The art of release management2010-11-08T21:37:00+01:00Thierry Carreztag:ttx.re,2010-11-08:/the-art-of-release-management.html<p>Last week I started a new job, working for Rackspace Hosting as the
Release Manager for the Openstack project. I'm still very much working
from home on open source software, so that part doesn't change. However,
there are some subtle differences.</p>
<p>First of all, Openstack is what we call an upstream project. Most of my
open source work so far involves <em>distribution work</em>: packaging and
delivering various open source software components into a
well-integrated, tested and security-maintained distribution. This is
hard work, one that is never completely finished or perfect. It is also
a necessary part of the open source ecosystem: without distributions,
most software would not be easily available for use.</p>
<p><em>Upstream work</em>, on the other hand, is about developing the software in
the first place. It's a more creative work, in a much more controlled
environment. The Openstack project is the new kid on the block of cloud
computing software, one that strives to become the open source standard
for building cloud infrastructures everywhere. It was announced in July,
so it's relatively young. There are lots of procedures and processes to
put in place, an already-large developer group, and an ever-growing
community of users and partners. The software itself is positioned to
run in high-demand environments: The storage component is in production
use at Rackspace, the compute component is in production use at NASA.
Openstack is planned to fully replace the current Rackspace Cloud
software next year, and a number of governments plan to use it to power
their local cloud infrastructure. Those are exciting times.</p>
<p>What does an open source project Release Manager do? Well, first, as it
says on the tin, it manages the release process. Every 3 or 6 months,
Openstack will release a new version of its components, and someone has
to make sure that that happens. That's OK, but what do I do the other 50
weeks of the year? Well, release managers also manage the release
<strong>cycle</strong>. A cycle goes through four stages: Design, Development, QA and
Release. It is the job of the release manager to drive and help the
developer community through those stages, follow work in progress,
making sure everyone knows about the steps and freezes, and granting
exceptions when necessary. At the very end, he must weigh the
importance of a bug against the risk of regression the bugfix introduces:
it's better to release with a known bug than with an unknown regression.
He is ultimately responsible for the delivery, on time, of the complete
release cycle. And yes, if you condense everything to 3 or 6 months,
this is a full-time job :)</p>
<p>My duties also include ensuring that the developers have everything they
need to work at their full potential and that the project is
transparent. I also have to make sure the developer community is a
welcoming environment for prospective new contributors, and present the
project as a technical evangelist at conferences. And if I still have
free time, I may even write some code where I need to scratch an itch.
All in all, it's a pretty exciting job, and I'm very happy to meet
everyone this week at the Openstack design summit in <del>Orlando</del> San
Antonio.</p>The real problem with Java in Linux distros2010-09-24T10:13:00+02:00Thierry Carreztag:ttx.re,2010-09-24:/the-real-problem-with-java-in-linux-distros.html<p>Java is not a first-class citizen in Linux distributions. We generally
have decent coverage for Java libraries, but lots of Java software is
not packaged at all, or packaged in alternate repositories. Some
consider that it's because Linux distribution developers dislike Java
and prefer other languages, like C or Python. The reality is slightly
different.</p>
<h2>Java is fine</h2>
<p>There is nothing sufficiently wrong with Java that would cause it to
uniformly be a second-class citizen on every distro. It is a widely-used
language, especially in the corporate world. It has a vibrant open
source community. On servers, it has generated very interesting stable
(Tomcat) and cutting-edge (Hadoop, Cassandra...) projects. So what
grudge do the distributions hold against Java?</p>
<h2>Distributing distributions</h2>
<p>The problem is that Java open source upstream projects do not really
release code. Their main artifact is a complete binary distribution, a
bundle including their compiled code and a set of third-party libraries
they rely on. From the Java project's point of view, it makes
sense: you pick versions of libraries that work for you, test that
precise combination, and release the same bundle for all platforms. It
makes it easy to use everywhere, especially on operating systems that
don't enjoy the greatness of a unified package management system.</p>
<p>That doesn't play well with how Linux distributions package software. We
want to avoid code duplication (so that a security update in a library
package benefits all software that uses it), so we package libraries
separately. We keep those up to date, to benefit from bugfixes and new
features. We consider libraries to be part of the platform provided by
the Linux distribution.</p>
<p>Java upstream projects consider libraries to be part of the software
bundle they release. So they keep the libraries at a precise version
they tested, and only update them when they really need to. Essentially,
they maintain their own platform of libraries. <strong>They do, at their
scale, the same work the Linux distributions do.</strong> And that's where the
real problem lies.</p>
<h2>Solutions ?</h2>
<h3>Force software to use your libraries</h3>
<p>For simple Java software, stripping the upstream distribution and
forcing it to use your platform libraries can work. But that creates
friction with upstream projects (since you introduce an untested
difference). And that doesn't work with more complex software: swapping
libraries below it will just make it fail.</p>
<h3>Package all versions of libraries</h3>
<p>The next obvious solution is to make separate packages for every version
of a library that the software uses. The problem is that there is no real
convergence on "commonly-used" versions of libraries. There is no ABI
protection, nor general guidelines on versioning. You end up having to
package each and every minor version of a library that the software
happens to want. That doesn't scale well: it creates an explosion in the
number of packages, code duplication, security update nightmares, etc.
Furthermore, sometimes the Java project patches the libraries they ship
with to include a specific feature they need, so it doesn't even match
with a real library version anymore.</p>
<p><em>Note: The distribution that is the closest to implementing this
approach is Gentoo, through the SLOT system that lets you have several
versions of the same package installed at the same time.</em></p>
<h3>Bundle software with their libraries</h3>
<p>At that point, you accept code duplication, so just shipping the precise
libraries together with the software doesn't sound that bad of an idea.
Unfortunately it's not that simple. Linux distributions must build
everything from source code. In most cases, the upstream Java project
doesn't ship the source code used in the libraries it bundles. And what
about the source code of the build dependencies of your libraries? In
some corner cases, the library project is even abandoned, and its source
code lost...</p>
<h2>What can we do to fix it ?</h2>
<p>So you could say that the biggest issue the Linux distributions have
with Java is not really about the language itself. It's about an
ecosystem that glorifies binary bundles and not source code. And there
is no easy solution around it; that's why you can often hear Java
packagers in Linux distributions explain how much they hate Java. That's
why there is only a minimal number of Java projects packaged in
distributions. Shall we abandon all hope?</p>
<p>The utopia solution is to aim for a reference platform, reasonably
up-to-date libraries that are known to work well together, and encourage
all Java upstream developers to use that. That was one of JPackage's
goals, but it requires a lot more momentum to succeed. It's very
difficult, especially since Java developers often use Windows or OSX.</p>
<p>Another plan is to build a parallel distribution mechanism for Java
libraries inside your distro. A Java library wouldn't be shipped as a
package anymore. But I think unified package systems are the glory of
Linux distributions, so I don't really like that option.</p>
<h2>Other issues, for reference</h2>
<p>There are a few other issues I didn't mention in this article, to
concentrate on the "distributing distributions" aspect. The tarball
distributions don't play nice with the FHS, forcing you to play with
symlinks to try to keep both worlds happy (and generally making both
unhappy). Maven encourages projects to pick precise versions of
libraries and stick to them, often resulting in multiple different
versions of the same library being used in a given project. Java code
tends to build-depend on hundreds of obscure libraries, transforming
seemingly-simple packaging work into a multi-man-year effort.
Finally, the same dependency inflation issue makes it a non-trivial
engagement to contractually support all the dependencies (and build
dependencies) of a given software (like Canonical does for software in
the Ubuntu main repository).</p>The 6 dimensions of Open Source2010-09-08T13:49:00+02:00Thierry Carreztag:ttx.re,2010-09-08:/the-6-dimensions-of-open-source.html<p>Why do people choose to participate in Open Source? It's always a mix
of various reasons, so let's try to explore and classify them.</p>
<h2>Technical</h2>
<p>The first dimension is technical. People like open source because
looking directly in the code gives them the ability to <strong>understand</strong>
the behavior of their software. No documentation can match that level of
precision. They also like the ability to <strong>fix</strong> it themselves when it's
broken, rather than relying on usually-broken support contracts. Any
non-Fortune500 that tried to report a bug to Microsoft and get it fixed
will probably get my point. Sometimes, they like the ability to
<strong>shape</strong> and influence the future of the software, when that software
uses open design mechanisms (like Ubuntu with its free and
open-to-anyone Development Summits). Finally, they may be convinced,
like I am, that open source software development methods result in
better code <strong>quality</strong>.</p>
<h2>Political</h2>
<p>Next to the technical dimension, we have a political dimension, more
precisely a <em>techno-political</em> dimension. People like Free software as a
way to preserve end-user <strong>freedom</strong>, <strong>privacy</strong> and <strong>control</strong> over
technology. Some powerful companies will use every trick in the book to
reduce your rights and increase their revenue, so it's more and more
important that we are aware of those issues and fight back. Working on
free and open source software is a way to contribute to that effort.</p>
<h2>Philosophical</h2>
<p>Very close to the political dimension, we are now seeing philosophical
interest in open source software. The 20th century saw the creation of a
consumer class with a new divide between those who produce and those who
consume. This dissociated usage of technology is a self-destructive
model, and contributing models (or participative production models) are
considered to be the solution to fix our societies for the future. Be a
producer and a consumer at the same time and be <strong>associated</strong> with
technology rather than alienated by it. Open source is an early and
highly successful manifestation of that.</p>
<h2>Economical</h2>
<p>Back on the ground, there are strong and rational economic reasons for
companies to opt to fund open source development. From most virtuous to
least, we first find companies <strong>using</strong> the technology internally rather
than selling it: sharing development and maintenance costs among
several users of that same technology makes great sense, and makes very
virtuous open source communities. Next you find companies selling
<strong>services</strong> around open source software: being the main sponsor of a
project gives you a unique position to leverage your know-how around
software that is freely available. Next you find <strong>open core</strong>
approaches, from companies making a business selling proprietary add-ons
to those using open source as crippleware. Finally, at the bottom,
you'll find companies using "open source" or "community" as a venture
capitalist <strong>honeypot</strong>. They don't believe in it, they resist
implementing what it takes to do it, but they like the money that
pretending to do open source will bring them.</p>
<h2>Social</h2>
<p>A very important dimension of open source is the social dimension. Many
people join open source projects to <strong>belong</strong> to a cool community that
allows you to prove yourself, gain mastery and climb the ladder of a
meritocracy. If your community doesn't encourage and reward those who
are motivated by this social dimension, you'll miss a huge chunk of potential
contributors. Another social aspect is that doing work in the open (and
in all transparency) is also great publicity for your skills and to get
<strong>employment</strong>. The main reason I got hired by Canonical was due to my
visible work on Gentoo's Security team, much more than to the rest of my
professional experience. Finally, the sheer <strong>ego-flattering</strong> sensation
you get by knowing that millions of people are using your work is
definitely a powerful drive.</p>
<h2>Ethical</h2>
<p>The last dimension is ethical: the idea of directly contributing to the
sum of the world's common knowledge is appealing. Working on open source
software, you just make the world a better place. For example, open
source helps third-world and developing countries to reduce their
external debt, by encouraging the creation of local service companies
rather than the purchase of licenses from US companies. That sense of
<strong>purpose</strong> is what drives a lot of people (including me) to work on
open source.</p>
<p>Did I miss anything? What drives you to participate in open source?
Please let me know by leaving a comment!</p>Why Open Core is wrong2010-07-08T19:53:00+02:00Thierry Carreztag:ttx.re,2010-07-08:/why-open-core-is-wrong.html<p>Open core is a business model where the base version of a piece of software would
be released as open source while some advanced features would be closed
source. It's been under a <a href="http://blog.sysdroid.com/2010/06/the-open-core-debate/">lot of
discussion</a>
lately, so I'll just add my 2 cents...</p>
<p>Beyond the obvious circumvention of free software principles, there
are <a href="http://www.computerworlduk.com/community/blogs/index.cfm?entryid=3047&blogid=41">well-known
issues</a>
with this model. In particular, it is difficult to set the right limit
between the "community edition" and the "enterprise edition", and you
end up having to refuse legitimate patches that happen to be a feature
in your enterprise edition roadmap. So building a real open source
community on top of the Open Core model can be quite a challenge. But
the main reason why I think it's wrong is purely technical.</p>
<p>I am a perfectionist. I work on open source software because I truly
believe that the open source development methodology ends up <em>creating
better code</em>. Having all your code out there, up for scrutiny and
criticism, makes you think twice before committing something half-baked.
Allowing everyone to scratch their own itch ensures top motivation of
contributors and quick advancement of new features. And I could go on
and on...</p>
<p>Open Core denies that the open source development model creates better
code. Open Core basically screams: for the basics we use open source,
but for the most advanced features, the enterprise-quality ones, closed
source is at least as good. You end up alienating a potential community
of developers for the benefit of writing closed source code of lesser
quality. You end up using open source just as a VC honeypot.</p>
<p>Open Core advocates
<a href="http://www.computerworlduk.com/community/blogs/index.cfm?entryid=3048&blogid=41">say</a>
that open source software companies need some unfair advantage to
monetize their efforts, and justify Open Core based on that. I'd argue
that selling expertise on an awesome piece of software is a better
business model. It's true it's a longer road to become rich, but I still
think it's the right one.</p>GPG key transition2010-06-03T12:27:00+02:002010-06-03T12:27:00+02:00Thierry Carreztag:ttx.re,2010-06-03:/gpg-key-transition.html<p>I've recently set up a stronger (4096R) OpenPGP key, and will be
transitioning away from my old (1024D) one. The old key will continue to
be valid for some time, but I prefer all future correspondence to come
to the new one. I would also like this new key to be re-integrated into
the web of trust. Please find here a <a href="http://people.ubuntu.com/~ttx/gpg_transition.txt">statement signed by both
keys</a>, certifying the
transition.</p>
<p>The old key was:</p>
<div class="highlight"><pre><span></span><code>pub 1024D/B6A55F4F 2004-04-01
Key fingerprint = 67FE 2899 7E9D 9D03 F1E7 C8BB BDC2 F5A1 B6A5 5F4F
</code></pre></div>
<p>And the new key is:</p>
<div class="highlight"><pre><span></span><code>pub 4096R/25B10423 2010-05-25
Key fingerprint = 22A7 9430 50DB 1E67 EC2B 641A 507A F890 25B1 0423
</code></pre></div>
<p>To fetch my new key from a public key server, you can simply do:</p>
<div class="highlight"><pre><span></span><code> gpg --keyserver pgp.mit.edu --recv-key 25B10423
</code></pre></div>
<p>If you already know my old key, you can now verify that the new key is
signed by the old one:</p>
<div class="highlight"><pre><span></span><code> gpg --check-sigs 25B10423
</code></pre></div>
<p>If you don't already know my old key, or you just want to be double
extra paranoid, you can check the fingerprint against the one above:</p>
<div class="highlight"><pre><span></span><code> gpg --fingerprint 25B10423
</code></pre></div>
<p>If you are satisfied that you've got the right key, and the UIDs match
what you expect, I'd appreciate it if you would sign my key:</p>
<div class="highlight"><pre><span></span><code> gpg --sign-key 25B10423
</code></pre></div>
<p>Lastly, if you could upload these signatures, I would appreciate it. You
can either send me an e-mail with the new signatures or you can just
upload the signatures to a public keyserver directly:</p>
<div class="highlight"><pre><span></span><code> gpg --keyserver pgp.mit.edu --send-key 25B10423
</code></pre></div>
<p>Thanks!</p>GTD with RTM2010-01-25T19:51:00+01:002010-01-25T19:51:00+01:00Thierry Carreztag:ttx.re,2010-01-25:/gtd-with-rtm.html<p>Following my colleague and friend
<a href="http://ubuntumathiaz.wordpress.com/">Mathias</a>'s advice, I've been using
GTD (Getting Things Done) to keep myself organized for some time now. A
recurrent question is "what software are you using?". I tried several
programs, but nothing could quite fit my system and decentralized use.</p>
<p>Lots of folks are now pushing GTG (Getting Things Gnome). While I see a
lot of potential in GTG, it's still a task manager (everything is a
task) rather than a flexible list manager. GTD uses lists of things that
are specifically <em>not</em> tasks (the inbox, the maybe lists, the project
list...).</p>
<p>Mathias recommended using <a href="http://www.rememberthemilk.com">Remember the
Milk</a> (RTM), a highly flexible web
service with lots of APIs (and more). I originally set up something
along the lines of this <a href="http://blog.rememberthemilk.com/2008/05/guest-post-advanced-gtd-with-remember-the-milk/">reference
post</a>,
but it failed for me in several areas:</p>
<ul>
<li>Parsing the Inbox was painful (no shortcut key to move tasks to other
lists)</li>
<li>No "tickler file" approach allowing you to forget about an item for
some time</li>
<li>My projects use work items in Ubuntu blueprints, and keeping them
in sync was also painful</li>
</ul>
<p>So I changed it; here is my new setup:</p>
<ul>
<li>New items are created in the "Inbox", without tags.</li>
<li>A @ToProcess smartlist, using "list:Inbox and (isTagged:false or
(tag:hide and dueBefore:tomorrow))", contains the stuff I need to
parse during next Process phase</li>
<li>Process phase: for each item in @ToProcess:<ul>
<li>If it's actionable and takes less than 2 minutes, do it and mark it
as completed (the "c" shortcut)</li>
<li>If it's actionable but needs more time, use the "s" shortcut to
tag it with the appropriate context ("me" if only I am required)</li>
<li>If you don't want to process it now, but want to file it in your
tickler file to reappear in two weeks: use "d" and "two
weeks" to set a due date, then use "s" and tag it "hide"</li>
<li>Delegate tasks by using "s" to tag them "wait" plus some context
for who you're delegating to</li>
<li>As soon as it's tagged, the item disappears from the @ToProcess
list, which is good!</li>
<li>If it needs to go to one of the Maybe lists, move it there</li>
</ul>
</li>
<li>My @NextActions smartlist uses "isTagged:true and not (tag:wait or
tag:hide)"</li>
<li>My @WaitingFor smartlist just uses "tag:wait"</li>
</ul>
<p>I no longer maintain "one list per project", which was painful for
me. I just use a "Projects" list that is a regular GTD Projects list I
use during weekly reviews. I use multiple "Maybe" lists (one for ideas
needing incubating, one for technologies to look at, one for blog
article ideas, etc.).</p>
<p>A few remarks:</p>
<ul>
<li>I use Google Calendar for actions occurring at a specific time</li>
<li>I use the priority shortcuts to give a sense of urgency that helps
me quickly pick the right next action from the @NextActions list</li>
<li>I use context tags for people too: for example, I tag "jib" on all tasks
that require jib to be completed. When I talk to that person, I use
the RTM tag cloud to quickly bring up a "tag:jib" search to get a
list of all subjects I need him for, but also a reminder of tasks I
delegated to him.</li>
<li>I try to have my inbox at hand all the time, to be able to quickly
drop in any idea that crosses my mind. I use the RTM Google
Calendar plugin, the RTM Netvibes module, and I also coded an "rtm" tool
using their Python API, for direct use when I'm hacking in a
terminal. All of them create items in the default list (Inbox) without
tags, so it just works.</li>
<li>I also use an ActivityReport smartlist (completedWithin:"1 week of
today")</li>
</ul>
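<p>As an illustration of the terminal capture idea above, here is a
hedged sketch of what such an "rtm" inbox tool could look like. The
original used the RTM Python API; this version calls the public RTM
REST endpoint directly, and every credential below (API key, shared
secret, auth token, timeline) is a placeholder you would obtain from
your own RTM account; treat it as a sketch, not the author's actual
tool.</p>

```shell
#!/bin/sh
# Hypothetical terminal "inbox capture" sketch: adds its argument as a
# new task via the RTM REST API. All four credentials are placeholders.
API_KEY="your-api-key"
SECRET="your-shared-secret"
AUTH_TOKEN="your-auth-token"
TIMELINE="your-timeline-id"
NAME="${1:-test item}"

# RTM request signing: md5 of the shared secret followed by all request
# parameters sorted by key and concatenated as key+value.
SIG=$(printf '%s' "${SECRET}api_key${API_KEY}auth_token${AUTH_TOKEN}formatjsonmethodrtm.tasks.addname${NAME}timeline${TIMELINE}" \
    | md5sum | cut -d' ' -f1)

# Create the task in the default list (the Inbox), untagged, so it shows
# up in the @ToProcess smartlist. Only runs once real credentials are in.
if [ "$API_KEY" != "your-api-key" ]; then
    curl -sG "https://api.rememberthemilk.com/services/rest/" \
        --data-urlencode "method=rtm.tasks.add" \
        --data-urlencode "api_key=${API_KEY}" \
        --data-urlencode "auth_token=${AUTH_TOKEN}" \
        --data-urlencode "timeline=${TIMELINE}" \
        --data-urlencode "format=json" \
        --data-urlencode "name=${NAME}" \
        --data-urlencode "api_sig=${SIG}"
fi
```

<p>Dropped in your PATH, it makes capturing an idea from any terminal a
single command, which is the whole point of keeping the Inbox at hand.</p>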
<p>Hope it helps :)</p>On burnout and technical management2009-07-31T20:41:00+02:002009-07-31T20:41:00+02:00Thierry Carreztag:ttx.re,2009-07-31:/burnout-and-technical-management.html<p>Jono <a href="http://www.jonobacon.org/2009/07/29/burnout-presentation-slides/">recently
posted</a>
the <a href="http://jonobacon.org/files/jonobacon-burnouttalk.pdf">slidedeck</a>
for his famous <em>12 stages of burnout</em> presentation. I highly recommend
this presentation, especially to technical teams working from home.</p>
<p>I think we are especially vulnerable to burnout, with limited social
interactions and sporadic discussions with our peers and managers. It's
quite easy to fall into the trap of the first two stages, <em>trying to
prove yourself</em> and <em>work harder</em>. And from there we are vulnerable to
falling into the spiral of the next ten stages.</p>
<p>This highlights one important role of managers of technical teams: to
protect their teams from this outcome. You shouldn't have to prove
yourself if your manager makes you confident you're in the right place
and you earned your position. You shouldn't have to work harder if your
work output is closely monitored and realistic goals have been set for
you.</p>
<p>Technical managers have lots of duties. They must build their team,
define objectives, ensure that goals are reached, protect their team
from vertical and horizontal organizational hazards... But keeping their
team in shape is one of their most important duties, and detecting and
avoiding burnout in their team is an important part of it.</p>
<p>Your family and your peers can watch your back and help you recover from
it. But a good manager should save you from it.</p>Distributions, or why Universe matters2009-02-02T10:20:00+01:002009-02-02T10:20:00+01:00Thierry Carreztag:ttx.re,2009-02-02:/why-universe-matters.html<p>Most of us know what makes open source software better than its
proprietary counterparts. However I would like to stress one purely
technical advantage of Linux distributions when compared to their
proprietary alternatives.</p>
<p>It's the concept of distribution: making software for your platform
available from a central repository, with installation, upgrades,
security updates and removals all done by the same set of integrated
tools and processes.</p>
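<p>As a concrete sketch of what one set of integrated tools means on a
Debian-derived system (the package name is only an example, and the
commands are shown in simulation mode so the sketch stays side-effect
free):</p>

```shell
#!/bin/sh
# The whole software lifecycle through one integrated toolchain
# ("exim4" is only an example package name). --simulate keeps every
# command side-effect free, the guard makes this a no-op on non-Debian
# systems, and failures (e.g. missing package lists) are tolerated so
# the sketch is safe to run anywhere.
if command -v apt-get >/dev/null 2>&1; then
    apt-cache search exim || true             # find software in the repository
    apt-get --simulate install exim4 || true  # install, dependencies resolved
    apt-get --simulate upgrade || true        # all updates, security included
    apt-get --simulate remove exim4 || true   # cleanly remove it again
fi
```

<p>Security fixes arrive through the same <code>upgrade</code> path as
everything else, which is exactly what the per-product auto-updaters on
Windows fail to provide.</p>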
<p>Having been forced to use Windows professionally in recent years, I
have been reminded of how great that is. Long-time distro users tend to
forget. The whole process of hunting down software, selecting something
that is less likely to contain spyware, downloading, installing... it's
so complex and boring. And then, you have to follow each product's
security advisories to try to stay up to date security-wise. And then,
all those separate auto-update services run in the background. And then,
when you try to remove the product with its specific uninstaller, you
realize there is not so much incentive for software publishers to allow
you to completely get rid of them.</p>
<p>So distributions, by making selected software simply available and
upgradeable, are invaluable. The corollary is that you need to have
enough software available through your distribution that people don't
have to manually install stuff, otherwise you're back at step one. I
remember switching from the old RedHat to another distro because I
wanted exim and they forced you to run... sendmail.</p>
<p>That brings us to my second point: why Ubuntu Universe matters so much.
A distribution is a lot less interesting if you can't find what you're
looking for in its repositories. Thanks to its strong Debian roots,
Ubuntu inherits the largest package base. But we also need to
ensure that those packages are working properly, are easy to deploy and
integrate well with the rest of the distro.</p>
<p>Having recently been accepted as a
<a href="http://wiki.ubuntu.com/MOTU" title="MOTU">MOTU</a>, I'm proud to contribute
wherever I can to this goal. The package wealth is the core strength of
a distribution, and this is why taking care of the Universe matters so
much.</p>