Ansible + OpenStack: No Panda-monium here!

If, like me, you are a frequent reader of technology-related articles on the internet, you may have seen this article come across your radar last Friday afternoon. Especially if the panda was as eye-catching for you as it was for me.

Written by the nice folks over at The New Stack, the article speculates that one of the reasons for Ansible’s popularity in the world of containers is, quite simply, the popularity of Ansible in general. Citing results from the most recent OpenStack user survey, the author states:

Ansible made up a 14-point deficit compared to six months ago to become virtually tied with Puppet as the leading way to deploy/configure OpenStack clusters. Why this is the case, we’re not sure. However, one reason may be that Ansible’s online community is particularly strong.

Not sure why Ansible is growing so fast in popularity with OpenStack users? This must be why the panda is so puzzled. Right?

Let me clear that up with a list of answers.

From a panda. (Incidentally, credit to Máirín Duffy, who actually created both of these pandas in her design work for the Fedora Project community. Super coincidence!)

Simplicity

Simplicity, ease of use, and a low learning curve are characteristics frequently cited as reasons folks have chosen Ansible — and in the ever-so-slightly (:D) complex world of OpenStack, the ability to abstract away some of that complexity gives users (admins, cloud consumers, I hate the word “users”…) more time to work on other things.

Well. You know what LeVar Burton would say here.

Naveen Joy of Cisco elaborated on this particular point in his talk at the OpenStack Summit earlier this year in Austin, Texas, titled “Kubernetes Automation on OpenStack Using Ansible,” before following up with a long list of benefits.

“We did not want complexity in our automation. Kubernetes and OpenStack are complex for a reason;  they provide a lot of functions and a lot of knobs that you can customize and tune, APIs, a number of projects, variables that you can customize. But automation framework — we wanted it to be simple. We didn’t want to learn another manifest language to get this going. So, Ansible, we looked at it, and said, here’s the way to go…”

That said — making the deployment and management of an OpenStack cloud a little bit easier isn’t the only way in which simplicity matters. The simplicity of using Ansible with OpenStack, or really, Ansible with *anything*, also endears it to many folks — including Major Hayden of Rackspace, who explained further during his session in Austin, “Automated Security Hardening with OpenStack-Ansible.”

“The reason we love it is because there’s no agent required – there’s nothing actually that you have to install on the server to use Ansible with it. So you can deploy from your laptop, you can deploy from a bastion server, you can deploy from Jenkins, you can deploy from wherever you like. But you don’t have to go in on the other end and install ruby, and install 56 other things — you just have to make sure the other end has Python, which, if you’re running OpenStack… I certainly hope you have that installed, otherwise your installation is going to go in a bad direction.”
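To make that a little more concrete, here’s a minimal sketch of what “no agent required” looks like in practice. The inventory group name below is a placeholder of my own invention; the point is simply that if the far end has SSH and Python, this runs as-is, with nothing to install there first:

    # A minimal sketch: no agents, no extra software needed on the far end.
    # "openstack_nodes" is a hypothetical inventory group; swap in your own hosts.
    - name: Verify hosts are reachable with nothing but SSH and Python
      hosts: openstack_nodes
      gather_facts: false
      tasks:
        - name: Confirm Python is present and the host responds
          ping: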

Versatility

Want to know one of my great “cloud” pet peeves? It’s that people say “users” all the time, and “users” can mean entirely different audiences to different folks. The operators? The sysadmins? The actual consumers of the cloud who just want to do things on it? Who knows who we’re talking about!

What I do know is this: Users of OpenStack clouds — at any of those levels — are able to use Ansible. It’s useful to all of those groups. And, possibly even more importantly: this usefulness applies to Ansible in use cases everywhere. And in a lot of cases — people aren’t making a move to Ansible because it’s awesome with OpenStack. They’re coming as “users” of new OpenStack deployments and Ansible is *already what they’re using*, because it is so versatile.

Spencer Smith from Solinea explains why the consulting side of their company chose Ansible for deploying Kubernetes, in this clip of their talk from the OpenStack Summit.

“And I should say that we chose Ansible mainly because our clients are already using Ansible. They’ve developed expertise internally — so we didn’t want to compromise that, but still wanted to give them a way to deploy Kubernetes.”

Community

Let’s take a look at this snippet from the article: “Why this is the case, we’re not sure. However, one reason may be that Ansible’s online community is particularly strong.”

Well. I suppose it would be pretty easy for us to pat ourselves on the back and say, “Yep! More than 17 thousand stars on GitHub! Over 1400 people in #ansible on IRC! Over 2200 unique contributors to Ansible and its modules! Nearly 180 meetups with 35 thousand members around the world!” — and throw in a quaint mic drop gif, and walk away.

But the truth is: the combination of Ansible and OpenStack isn’t awesome, and thus growing in use, simply because of the strength of the Ansible community. It is because of the work and collaboration and self-promotion of the group of folks who care about Ansible AND OpenStack — even without a formalized, central hub.

Here’s just a quick list of a few of the pockets of OpenStack + Ansible work:

(Note: I am super lazy at linking. But lucky for you, I have links to nearly all the projects listed below captured right here, which you can reference from this day forward!)

  • OpenStack “Big Tent” projects using Ansible: Kolla (OpenStack services in Docker containers). Openstack-ansible (OpenStack in LXC containers). Bifrost (Ansible + Ironic for bare metal). Openstack-ansible-security (automated deployment of security enhancements, based on the Security Technical Implementation Guide from the United States government).
  • OpenStack & Ansible users sharing their magic outside of the big tent: Ursula. Folks at Cisco. HP’s Helion. The fine humans at Catalyst Cloud (who, by the way, really get open source).  And plenty of other awesome examples by individuals and companies and projects are out there. 
  • OpenStack community members contribute and maintain OpenStack-related code inside of Ansible. Ansible’s OpenStack modules (more than 30 of them!) allow users / operators / consumers of OpenStack clouds to perform various functions on or in or to their cloud (see the quick sketch just after this list). These modules are used every single day by OpenStack’s infrastructure team to manage the project’s own infrastructure, which, unsurprisingly, runs on an OpenStack cloud; also unsurprisingly, the maintainers of these modules in Ansible are largely members of OpenStack’s infrastructure team.
  • OpenStack is built with help from Ansible. OpenStack’s code is continuously tested and integrated prior to any commit — ensuring that small bits of bad code aren’t breaking life for the developers in a gigantic, incredibly busy project. How do they do it? With the help of Zuul, an automation system built by OpenStack’s infrastructure team, for the OpenStack community, and used by more than 50 other open source communities as well. How does Zuul do this magic? It used to be with Jenkins. It’s now performed with Ansible.
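And just to give a flavor of what those OpenStack modules look like in a playbook, here’s a minimal sketch that boots an instance with the os_server module. The cloud name, image, flavor, and keypair below are placeholders; in real use they’d map to entries in your own clouds.yaml, and you’d need the shade library available wherever the playbook runs:

    # A minimal sketch of Ansible's OpenStack modules in action.
    # "mycloud", the image, flavor, and keypair names are placeholders;
    # credentials come from your clouds.yaml rather than being hard-coded here.
    - name: Boot a test instance on an OpenStack cloud
      hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - name: Launch a small instance from a cloud image
          os_server:
            cloud: mycloud
            name: ansible-demo
            state: present
            image: fedora-cloud-base
            flavor: m1.small
            key_name: my-keypair
            wait: yes

Swap os_server for the other os_* modules (networks, volumes, images, keypairs, and friends) and the pattern stays the same.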

The truth is — none of these, alone, is the single reason for the growth of Ansible usage in conjunction with OpenStack. And it’s not just community, or Ansible’s simplicity, or versatility. It’s all of them, working together, compounding upon their collective successes.

Circling Back

Those of us who have spent time thinking Super Heady Deep Thoughts about open source and communities of practice have heard of “the virtuous circle” — a term which has many interpretations to be found on the internets (go figure!). But I find that this one, from this report on Sustainability in Open Source Commons, is spot on.  The author describes the virtuous circle as: “where good initial products attract users, which then potentially attract a new developer, which leads to more improvements. Our research clearly shows that successful projects have a potentially significant user community and that this user community drives project continuity.”

For the Ansible community, this pattern isn’t just limited to “Ansible + OpenStack.” I’d argue that it is repeating itself all over the place with a whole variety of “Ansible + X” projects and practices. Containers. Orchestration tools like Kubernetes. AWS. Windows. Networking and its glorious SDN/NFV/TLA future. I could go on and on.

Ansible’s users and contributors consistently blow my mind with their usage of Ansible: how they are making it useful with other open source projects, and improving their day-to-day workflows. That they do so while also collaborating and sharing with other users and building communities of practice around Ansible EVERYWHERE just leads to more involvement, and more improvements, and more usage of Ansible all around. And it enables those of us working for Ansible to do what we’ve always done: follow our users — because, as users, they know what’s important and useful to them far more than we could ever dream to.

Which really means that Ansible and OpenStack — or Ansible and container orchestration — or Ansible and networking — isn’t thriving simply because of “the virtuous circle.”

It’s that the Ansible community has turned that circle into more of a virtuous Venn diagram.

 

Ansible Extras Modules + YOU: How you can help. (It’s easier now!)

If you are a caring user of Ansible, and you meet any of the following criteria, this post is for you — because you can help to improve the quantity and quality of modules in Ansible Extras.

  • You are a user of, or contributor to, Ansible Extras modules
  • There is a pull request for an Extras module that you have been anxiously waiting to see merged (yours, or someone else’s!)
  • You’ve been looking for a way to contribute to the Ansible community
  • You are looking for fun and constructive ways to procrastinate doing other things you should be doing

In short: Our improved “new extras modules” review process is now in place, and any new Extras module can be reviewed for inclusion by any user of Ansible who cares to see that module be included.

Want to see a few of the modules that need love? Scroll down to the end!

***

Folks who keep an eye on the various Ansible repositories have probably noticed that your friendly neighborhood Ansible community team (that’s me and Greg DeKoenigsberg) has been digging through a pretty sizable backlog of issues and pull requests, primarily in the Extras and Core modules repos. We’ve been doing this with a few things in mind: obviously, getting caught up as best we can, but more importantly, making sure that the contributions of community members are being acknowledged and acted on. We value Ansible’s community members tremendously, and the last thing we want is for their hard work to go unused — or worse, for those people to feel demoralized and not contribute in the future.

With this in mind, we have not only been catching up on the status of each and every outstanding issue and pull request in the Extras repository; we’ve also been reviewing ways to reduce what is essentially “GitHub issues debt” (a play on the term “technical debt“). One of the main issues we’ve identified is, quite simply, bottlenecks in process. The addition of myself to Greg’s team (formerly a team of one, now a team of two!) is helping with the day-to-day tending and triage of new and existing issues, which was a bottleneck in itself. But the previous review process for new modules relied on a short list of approved reviewers, who not only have lives and are sometimes busy, but also didn’t always have domain expertise with the technologies enabled by new modules.

And thus: A new process has been born. I encourage you to read the details, which Greg outlined on Friday on the ansible-project and ansible-devel mailing lists, particularly if you are interested in helping with reviews, or are already contributing to Ansible. That said, here are the important highlights:

  1. Any caring Ansible user can review new Extras modules.
  2. Two +1 votes, and no -1 votes, will result in the new module being merged into Extras. More specifically: one +1 vote confirming that the module works as expected (meaning you have tested the module in good faith), and one +1 vote verifying that the module follows the Ansible module guidelines.

Finally, as we outlined in the above-referenced mail: this process is based on trust. Trust that users are testing these modules in good faith, and ensuring that the guidelines are being followed; trust that the submitters of these new modules are willing and able to maintain them over time, and to respond to issues and pull requests in a timely fashion.

Want to help?

You have my undying gratitude! And the endless thanks, I’m sure, of the module contributor. Here are some links to get you started:

Help get these Extras modules mooo-ving!

While this is in no way a full list of all outstanding new modules that need reviews, it is a list of those that aren’t already under review, require revision, etc. Some of them are entirely new; others are new modules that have had some review and revisions made, but are now stalled for lack of new review, and can now be approved under the new process. If you’re feeling particularly motivated and want to see if something you’re interested in is making progress or needs help — the full list of unreviewed or in-progress new Extras modules can be seen here.

And, yes: Honeybadger is first. Because I know you give a… ahem. Hoot. 🙂

Honeybadger: module to notify Honeybadger.io about app deployments.

Database Stuff:

Docker: docker_facts module to return information about running containers

Nexus: This adds a module for pulling artifacts from a Nexus repository.

Windows (Yes, Windows.)

AWS / EC2 / S3 / Redshift

Sensu: sensu_subscription manages the Sensu subscriptions of the Sensu client running on a machine. Note: this goes well with the already-existing sensu_check module, which manages Sensu checks and allows you to specify all possible options (including the correct types) — if you’re using that module, you’re probably a great candidate to review sensu_subscription!
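(Not sure if that’s you? Here’s a rough, purely hypothetical sketch of the sort of task sensu_check users are already writing; the check name, command path, and subscription names are made up. If this looks familiar, reviewing sensu_subscription should feel like home.)

    # A hypothetical sensu_check task, for illustration only; the check name,
    # plugin path, and subscriber names below are made up.
    - name: Ensure an nginx process check exists on the Sensu client
      sensu_check:
        name: check-nginx-process
        command: /etc/sensu/plugins/check-process.rb -p nginx
        handlers:
          - default
        subscribers:
          - webservers
        interval: 60
        state: present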

nfsexport: module for working with entries in /etc/exports (or an NFS exports file in an otherwise-specified location).

Apache Kafka: The kafka_topics module creates new topics in Kafka, or modifies existing ones (though only increasing the partition count is supported in Kafka). Topics can be operated on one at a time, or in a group, to save on starting up the JVM for each topic to check its current state.

git-flow: Adds hooks for executing git-flow commands. Git-flow is a collection of Git extensions to provide high-level repository operations for Vincent Driessen’s branching model.

openweather fact gathering: module that uses the OpenWeatherMap API to retrieve the current weather at a location.

Webfaction: module to gather facts from Webfaction, including facts about applications, websites, databases, and domains.

ZFS: a module for managing ZFS admin privileges.

ProfitBricks: module to create or restore a volume snapshot. 

Interesting Utilities:

PagerDuty: pagerduty_service allows you to create, update, disable, and delete services in PagerDuty. You can configure webhooks for the services, and prompt PagerDuty to regenerate service keys for API type services.

CoreOS / fleet:

OpenVZ:

Enabling Happiness.

Over the course of my time working in open source communities, I’ve become pretty good at a few things:

  • Listening
  • Connecting people, ideas, stories, and other communities
  • Getting participants to believe they can do awesome things, empowering them to make them happen, and getting roadblocks out of their way
  • Listening some more

Listening is first for a reason: Without it, there are no stories to share. No people to connect. It provides me with a bit of situational awareness for both the communities in which I participate, and other communities as well. And it helps me develop a sense of empathy for what people experience, both the good and the bad, either as end users or contributors. I like to know what is working — and I like to know what isn’t working, so it can get fixed.

The truth is: I love seeing people feeling happy. Feeling accomplished. It’s one of the best things about working in open source — knowing that your work, or even the work of another community, has helped to empower people to get things done. And as a good listener, I always take note of what other projects are making people happy — and as I’ve shared elsewhere previously, one of those is Ansible.

Ansible is helping a LOT of folks feel very happy — to the tune of nearly 12,000 stars on GitHub at the moment. Some of that can be attributed to its ease of use, allowing users to quickly implement things that previously had simply been out of reach. But more poignantly, the underlying theme to the stories that I’ve heard has been that because it’s easy, because they were able to quickly accomplish things, they now had more time to focus on the hard stuff: Culture. Learning to communicate ideas and stories across multiple teams. Breaking down silos. Ansible got out of the way and enabled them to be successful beyond just the technical bits.

And it’s built by an open source community that embraces the very same principles: empowering people to contribute by making it ever easier to do so, and then getting out of their way so they can get stuff done.

I’m incredibly delighted, and a little bit honored, to share that I’m joining the team at Ansible as a Community Architect. I’ll be doing what I love to do: listening, connecting people and ideas and communities, and making sure people can get stuff done. And I’ll be doing so with a terrific group of folks, including Greg DeKoenigsberg, my boss-to-be and someone to whom I give great credit for telling me I could do amazing things (and then getting out of my way).

I officially start August 3rd, but in the meantime: you can catch me at OSCON next week, largely in the hallway track, and also running the Ansible BoF Thursday night. I promise to not keep you out pasture bedtime.

Come and find me and tell me your stories.

Join me for Elastic’s First Developer Hangout, TODAY! (June 12)

Since I started at Elastic last November, one of the things that Leslie and I have been wanting to do is start a Developer Hangout series. You know, one of those things where you can watch or participate in a live chat, with an actual human, in a setting that is less formal and more fun, but still gives you actual useful information. Now that we’ve grown our team to include amazing folks like Shaunak, Valentina, and Michelle, we’ve had time to actually launch this thing, along with our new Developer Newsletter — and I’m delighted to share that our FIRST EVER hangout will be TODAY, June 12, at 11:00am PDT. (That’s 18:00 UTC, 14:00 EDT, 20:00 CEST, for non-Pacific-time-zone folks.)

Here are the details on how you can join us today:

So what will we generally be doing in our Hangouts? The format is pretty simple: our sparkling-personality host (that’s me, this week!) will be joined by one of Elastic‘s awesome developers, who will be giving a short 15-20 minute presentation about something Amazing and Enlightening. And the rest of the Hangout depends on you, my friends: it’s an AMA format (that’s “ask me anything”, not American Medical Association, American Motorcyclist Association, or otherwise), which means that if you have questions on anything (within the normal bounds of politeness, that is!) for our guest star, you can ask them on IRC and we’ll queue them up to be answered. Our goals here are to be informative and authentic, and not stuffy or sales-y.

My guest this week, Aaron Mildenstein, is one of Elastic’s developers for the Logstash project, and he’ll be sharing his story of how his involvement in open source, specifically Logstash, landed him a job doing open source work for a living on the project he loves. Now, of course, this very much mirrors my own story — volunteering in the Fedora Project on the Marketing team eventually turned into being employed by Red Hat to work on Fedora full-time — so it’s a subject near and dear to my heart. I got a sneak peek a few days ago of his presentation and he has some great perspectives that I had never thought about before, so I’m hoping that many of our viewers out there will learn a new thing or two (or twenty)! We’ll probably also wind up chatting a bit about Logstash and Curator — so there will be plenty of opportunities to learn new stuff. And of course, if you can’t make it, we’ll be sure to share how to watch it later.

Our future hangouts will be announced each week in the aforementioned Developer Newsletter — and will be for subscribers only, so you should DEFINITELY sign up. It’s easy to consume, with lots of pointers to information with brief descriptions, so you can quickly find the information that’s useful to you. (We were inspired by some of our favorite newsletters that do the same, including Gareth Rushgrove‘s DevOps Weekly, Matt Jaynes’ Briefs on Ansible, and Docker’s Weekly Newsletter, all of which you should also totally subscribe to.) We’ll be bringing you the latest and greatest on all things ELK (Elasticsearch, Logstash, and Kibana) on a weekly basis!

In conclusion: I hope to see many of you today on IRC for our first hangout, and that you’ll watch along live — I’m a bit nervous, but totally excited to have a great topic for our first Hangout and to finally, officially launch this series. We’re really hoping to have some great questions, so, HINT, HINT, you should join us on IRC to ask those questions in #elastic-webinar as you follow along on YouTube. It’s a topic that both Aaron and I are super passionate about, and one that I know can inspire great questions from many of my friends in the open-source universe. So be there! (We love moral support too. And feedback afterwards! If nothing else, you can watch as I try really hard not to accidentally drop foul language, and enjoy that. :D)

distros and silos, devops and open source

Kris Buytaert, whom I’ve had the pleasure of meeting on several occasions at various conferences, recently wrote a blog post on the topic of systemd and devops. He is someone who has the experience of being very much hands-on with actual infrastructure in production (amongst many other talents and skills) — and insight about the various pieces that make up an organization’s infrastructure, from someone who also really understands the cultural aspects of devops, is incredibly valuable to me.

While Kris does dance around one of everybody’s favorite topics (systemd), he specifically avoids turning the post into yet another rant about systemd. His main point: identifying that there is a gap in communication between OS developers and users. It may be a lack of empathy, a lack of a feedback loop, etc. And Kris specifically points out that this is a starting point for discussing how to fix that gap.

Similarly, this blog post of mine is not meant to defend or justify decisions made, nor is it to point fingers. It is meant to help build upon the discussion.

Silos

Having been the Fedora Project Leader up until fairly recently, I can certainly say that one of the things I worried about was this gap: whether we in Fedora, and distributions in general, were becoming silos. Even though the Fedora Project is made up of contributors with skills other than development, I still worried that as a whole, we weren’t often looking outside the window to see how the rest of the world was doing work, to hear about their problems. I did a lot to try and open those windows, and share those stories, especially with the developers — because most of them have never had to manage large numbers of systems at scale. Most of them have never carried a pager that inevitably went off at the WORST POSSIBLE times. (Or dropped one down a toilet. While flushing. I did that. It was totally an accident though. Seriously.)

Bridging that gap is hard. Yes, distros could do a better job of listening to end users. This is where efforts like the OpenStack User Committee (because this issue isn’t limited to just distros) may prove to be incredibly helpful. It gathers and formalizes commentary from users into something that is essentially seen as a contribution to a community, in digestible form, rather than yet-another-angry-mail that represents an unknown portion of the population or community. And recognition as an actual contribution to a larger process, and ultimately to success, can carry a lot of weight, particularly in communities that operate partially by meritocracy, as many distributions do.

Would something like this work in distributions? I don’t know. Perhaps it’s worth trying or floating around as an idea.

DevOps and Open Source

We talk about feedback loops a lot in the DevOps community. How to better stay in touch with and listen to and have empathy for end users or teammates. How to “build the right thing,” as the Lean Startup tells us, by showing our work as early as possible. Or, as folks in Open Source might say, “release early, release often” — because it enables transparency about what’s going on, and provides opportunity for earlier feedback and discussion.

I don’t think it’s a coincidence that DevOps and the use of open source projects seem to go hand in hand when it comes to success stories. The shared values of the DevOps and Open Source communities, when it comes to *how we practice our craft and do it best*, are often similar. Transparency. Why we document how and why things happened. Why we “release early, release often.” We all strive for continuous improvement.

You’d think it would seem natural to talk to each other. But we don’t. I think distributions are in some ways very set in their processes – and simply expect that if you want change, you’re going to show up and make it happen. And when it comes to the DevOps folks, the end users of distros and other open source projects, the focus on feedback loops tends to be with their end users — not necessarily their “suppliers.”

Where I go all Deming on y’all

I’m going to take a page from John Willis, aka @botchagalupe, for a moment, and refer you all to W. Edwards Deming and one of his 14 points for management.

“End the practice of awarding business on the basis of a price tag. Instead, minimize total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust.”

Now, if you’ve read Deming’s “Out of the Crisis,” you’ll know that this is largely related to manufacturing. Suppliers supplying you with products and parts that are screwed up or defective or not meeting your standards can SERIOUSLY screw up your day, as a manufacturer. But when you build a relationship, when there is loyalty and trust between the supplier and the user of those supplies, there is mutual understanding of roadmaps, and needs, and there is empathy and consideration.

In some ways, an open source project is the greatest supplier of all. You can see the incoming quality of code; you can see roadmaps and plans and delivery dates; you can file feature or improvement requests, and sometimes they will be implemented.

But the concept of a relationship builds upon that. (Yes, we all know relationships can be complicated, but generally the rewards are worth it, and this should be no exception.) Even Deming pointed out the differences between having “specifications” and knowing the whole story. And in this case, the whole story is: what are your problems? What are you trying to accomplish? What are your goals? What do you wish could be improved — whether in a distro, or in a project?

Those stories are what create empathy – and relationships. Done regularly (you know, like a loop), those relationships can last. Having stories, understanding the problems, and understanding the practices people use on a daily basis gives developers the opportunity not only to develop empathy, but also the opportunity to solve *the right problems*. And not go fixing things that may not actually be broken. And not feel like the only feedback they ever get is from people who are unhappy.

Generally: I think there are improvements to make all around in the closing of the gap, on both sides of the fence. But like many problems, it all boils down to empathy.

Fedora, Red Hat, RHEL 7, & Open Source. (Or: How RHEL 7 is literally “Beefy.”)

Many of you probably noticed (or were gleefully anticipating) the release of Red Hat Enterprise Linux 7 today. Which means it’s a really super day to be a Red Hat employee — seeing the culmination of so much open source work come together as the next major version of our flagship product is pretty inspiring.

Of course, I have a unique perspective on this process, having been the Fedora Program Manager (aka: schedule wrangler) and Fedora Project Leader over the Fedora 15 – Fedora 20 time frame, and RHEL 7 is largely based on Fedora 19, with bits of 20 pulled in as well. So much of what I’m reading today about the features and capabilities of RHEL 7 is very much a reminder of many points in those release cycles, and the effort and sweat the Fedora Project community put into that work. (And in some cases, blood and tears as well. Well, maybe not blood. But probably hot dogs.)

To give a bit more insight into this process, without truly taking you down the rabbit hole, here’s the short version of how Fedora integrates technologies, and serves as the upstream for Red Hat Enterprise Linux:

  • Hundreds of upstream project communities are working every day to improve their own code bases. At certain points determined by those communities, they release versions of their projects.
  • Fedora Project community members, who are often also involved with those upstream communities, will work to integrate new projects and updated releases of existing projects into its distribution, Fedora; and in fact, the inspiration to create new, innovative technologies in the Linux distribution space often evolves out of the community as well. Fedora is released approximately every 6 months, and strives to have the latest-and-greatest versions of those projects available. This makes for a fast-paced, cutting-edge distribution that offers a view into innovations that many folks have not otherwise tried.
  • Every few years, Red Hat will take a snapshot of Fedora, at a time when they feel it has evolved a feature set that is compelling and rich in new capabilities that the market is ready for, and will shape that over time into a major release of RHEL. In today’s case – RHEL 7.

I thought it would be fun to look back over the past several releases of Fedora and take a look at some of the most innovative features that were developed and integrated into Fedora over that time period which have now made their way into RHEL 7.

  • systemd (introduced in Fedora 15) – “a system and session manager for Linux, compatible with SysV and LSB init scripts,” to quote the project page itself. The project has continued to innovate since that point, introducing additional enhancements in subsequent releases, including several in Fedora 19.
  • USB network redirection (Fedora 16) – the ability to redirect a USB device to another machine on a network. Most notably useful for connecting a USB device from one machine to another inside a qemu-kvm virtual machine.
  • Anaconda, the installer, got a major facelift in the form of a new UI (Fedora 18), and enhancements “under the hood” also enable easier integration of new storage technologies into the installation experience in the future.
  • Storage management enhancements, including a command line utility and a library (libStorageMgmt) that provides an open source storage API for storage area networks and network attached storage (introduced in Fedora 18).
  • Virt improvements everywhere. Including:
    • virtio-rng (Fedora 19), making entropy available from the host to guests, preventing entropy starvation
    • Live VM migration without shared storage (Fedora 19), doing… well, just that. Eliminating the need for shared storage in a live VM migration.
  • High Availability & cluster changes and improvements, including the move from rgmanager to Pacemaker (Fedora 17).
  • firewalld became the new firewall solution in Fedora 18, enabling firewall changes to be applied at runtime, without restarting the firewall or dropping existing connections (among many other features).

And that, my friends, is quite literally the tip of the iceberg. Over the course of Fedora 14, 15, 16, 17, 18, and 19, more than 250 features, upgrades, or significant changes (such as defaults) were made to packages in Fedora. And as a result – features you’ll find in RHEL 7 have largely already received thorough testing and use, and are very much ready for prime-time, enterprise usage. (Which is why, as noted in the blog post title, RHEL 7 is Beefy — as “Beefy Miracle” was the release name of Fedora 17. I suppose the most accurate way to put it would really be to say that RHEL 7 has partial Beefy content.)

While not all of the changes or features made in Fedora have made their way into RHEL 7, a significant portion of those that have are documented in the RHEL 7 Release Notes. And for those that didn’t, many have been made available through Fedora’s EPEL (Extra Packages for Enterprise Linux) repositories, thanks to the awesome Fedora Project community members who do this work.

So if you see someone playing with RHEL 7 today and they look a bit overwhelmed at all the newness, give them this tip: There may be a lot of new in RHEL 7, but Fedora is already building the future of RHEL 8, *right now* – so if they want to get a leg up on the next major release, or if they want to influence what that next release looks like, Fedora is the place to do it.

Introducing the new Fedora Project Leader, and some parting thoughts.

“I can resist everything except temptation.”  — Oscar Wilde

Recently, I announced my intentions to move onwards from the Fedora Project Leader position. Today, I’d like to share with the Fedora community, and the wider world, a few parting thoughts, and announce the name of the new FPL.

As many of you are probably aware, the FPL is employed by Red Hat, and the process of selecting an FPL is one that involves consulting with many folks both internal to Red Hat, as well as external, including consulting the Fedora Project Board. When I was approached by my former boss, Tim, as well as Jared Smith, our previous FPL, about the opportunity, there wasn’t a moment of hesitation before saying yes. It truly is an amazing opportunity to influence the Fedora Project community and the Fedora distribution, and more broadly, the pace of innovation in the larger universe of open source. I knew that the job was daunting, even all-consuming at times, and knew that many challenges would lie ahead, both for myself and the wider community.  But I also saw – and continue to see – tremendous potential, and had a million ideas already swirling in my head; while I certainly had the option to stay in my previous Program Manager role, I couldn’t possibly say no to the opportunity.

Of course, leadership doesn’t simply happen by being appointed to a position; one truly has to lead by example, by getting things done, and most importantly, by enabling and encouraging others to get things done, so that new leadership can continue to grow and flourish. One of the earliest questions I got after taking on the position was posed to me by Greg DeKoenigsberg, whom I now join in the “Former FPL Club”. And the question was this: “So. Who is the next FPL?”

While I really had no answer at the time — after discussion, it dawned on me that one of the most important parts of my job was to ensure resilience in our community; to ensure that we were nurturing new folks, so that when the day came and we were ready to move on to new things, either inside or outside of Fedora, there would be people ready and willing to step up to the task. Doing this is even discussed in The Open Source Way handbook, in the “Turn over project leaders regularly” section, with the most poignant line stating: “There is no job in the world that cannot gain from a fresh mind and perspective.”

***

“Out of clutter, find simplicity.
From discord, find harmony.
In the middle of difficulty lies opportunity.”  — Albert Einstein

The Fedora Project is filled with opportunity; both for individuals to make a difference in a community, and for a community to make a difference in the world. Our embrace of open source principles, commitment to driving forward technology, and belief in our own Foundations keep the Fedora community engaged, enthusiastic, and perpetually moving forward.

The ability to bring people together, to unify ideas, to break down barriers, and to find elegant and simple solutions to seemingly difficult problems: these are just a few of the traits that a Fedora Project Leader can bring to the table to help guide the community forward. And I couldn’t be happier in announcing that Matthew Miller will be taking on the Fedora Project Leader role, as he has demonstrated over the past months and years his ability to gather the community around the Fedora.next initiatives, both from a technological and a social standpoint.

Of course, Matthew is no newcomer to the Fedora Project, having been around since the *LITERAL DAWN OF FEDORA TIME* — he was an early contributor to the Fedora Legacy project, and helped to organize early FUDCons in his area of the world, at Boston University. Since joining Red Hat in 2012, he’s been responsible for the Cloud efforts in Fedora, and as the previous wrangler for that team, I was thrilled when he came on board and was willing and able to start driving forward some of the initiatives and wishlist items that team was working on. What started out small has since grown into a vision for the future, and I’m confident in Matthew’s ability to lead the Fedora Project forward into its next 10 years of innovative thinking.

And to you, lovely readers, and contributors to the Fedora Project Community: My heartfelt thanks goes out to you for your years of support, friendship, patience, and well-wishes as I move onwards; I have truly relished (ONE LAST PUN) my time as Fedora Project Leader.  I hope that you’ll all join me in congratulating Matthew on his new role, and I’m sure that his enthusiasm and fresh perspective will be of immeasurable value as Fedora moves into the future.

Thanks to you, I’m much obliged…

…for such a pleasant stay.

When one prepares to retire from the Fedora Project Leader position, there are two places in which to look for inspiration in writing their “departure advisory”:

  • Past notices of intentions to retire, such as those of my lovely predecessors Max Spevack and Paul Frields
  • Led Zeppelin lyrics

And thus, this blog post will draw a bit from both of those — but I will look to Page/Plant to kick it off:

“And to our health we drank a thousand times… it’s time to ramble on.”

(Note: A thousand times may be an inaccurate estimate.)

I’ve been in the Fedora Project Leader role for a bit over two years now, and was the program manager for Fedora for nearly a year and a half before that; needless to say, Fedora has been my full-time (and lots of my other time) job for a long time now. Being in this role certainly is humbling and daunting at times, and amazingly gratifying at others, but it has also afforded me an almost overwhelming opportunity to learn about anything and everything going on in open source outside the Fedora universe, with the hope of bringing those people, projects, and ideas into our fold. Some of it is incredibly interesting, and some of it brings wonderfully creative thinking to solving the problems that we face in the technology space today — and, like those before me, it has also led me inevitably into exploring new opportunities.

With Fedora 20 well behind us, and Fedora.next on the road ahead, it seems like a natural time to step aside and let new leadership take the reins. Frankly, I shouldn’t even say “the road ahead,” since we’re well-entrenched in the process of establishing the Fedora.next features and processes, and it’s a rather busy time for us all in Fedora-land — but this is precisely why making the transition to new leadership as smooth as possible for the Fedora Project community is so important. It’s a good time for change, and fresh ideas and leadership will be an asset to the community as we go forward, but I also want to make sure it’s not going to distract us from all the very important things we have in the works.

I’ve informed the Fedora Project Board already of my intentions, and my friends, Red Hat management and family are all aware and supportive of my decision to move onwards. Red Hat engineering and management, as the employer of the FPL, will obviously be involved in the transition process, and the Fedora Board will continue to be advised and consulted during the process as well. While what it is *exactly* that I’m doing next is still to-be-determined, I will be sticking around to help with transition tasks, general FPL-edification, and generally ensure a smooth turnover into the New World, after the proverbial torch is passed.

And “after” is a key word here, of course: Today is not my last day, or anything like that. I’m just letting everyone know of my plans to, well… Ramble On.

Stay tuned for updates.

Fedora, Red Hat, and investing in the future

It was just about 4 years ago that I hopped on a plane to Raleigh, North Carolina to meet up with some folks and work on Marketing Things for an open source project that I had recently started contributing to, called Fedora. The Fedora Marketing Team was having a FAD (Fedora Activity Day) – and I was sponsored to come out, get things done, watch some hockey, and eat some barbeque. With the exception of my significant other, this was pretty baffling to most of my family and non-internet friends; not for the reason one might expect, which is, “You’re doing things for free?” – but mostly, “They’re *paying* for you to fly out there?”

Flash-forward to the present – and while I certainly didn’t know everything that I now know today, standing in the Fedora Project Leader shoes, four years later – my answer is still remarkably similar: Red Hat invests in Fedora because it is the upstream for Red Hat Enterprise Linux. 

Red Hat’s investment in Fedora is significant; more than a dozen people support Fedora’s community infrastructure, both “people” and “technology”, in their full-time roles as Red Hat employees. Hundreds of engineers who work on open source projects upstream of Fedora integrate their work into the releases we do every 6 months.  Budget is provided for collaborative events, such as Fedora Activity Days, and FUDCons & Flocks, as well as for equipment, bandwidth, swag, event sponsorships, media, and other various services. 

Of course, being the upstream for RHEL means that Fedora is much more than simply an *integration* point. The Fedora Project community is made up of contributors from countless viewpoints and interests, both in terms of contributions and use cases. If you’ve read “The Lean Startup,” you’re familiar with the notions of “build the right thing,” and “faster feedback loops”; Fedora provides this exact model which has enabled the success of RHEL. Our rapid, 6-month cycle enables Fedora to quickly integrate the latest and greatest technology advancements – and to backtrack, tune, or adjust how those features work based on feedback in time for the next release.  This process has in turn enabled Red Hat to produce a release of RHEL every three to four years that is not only consumable by their enterprise customers, but is also expected to meet their current technological needs.

The Fedora Project recently celebrated its 10th anniversary – and its 20th release – of developing the operating system we know and love as Fedora. Over those 10 years, the technology landscape has changed dramatically, not just in terms of what and how things are produced, but also in terms of how they are consumed. It’s not particularly a chicken-and-egg situation, but more simply an evolution where technology and use have grown together. 

  1. Breadth, complexity, and velocity: We’ve seen the emergence of compute virtualization, cloud, big data, virtualization round 2 (The Network Edition), and containerization technologies, one right after the other – primarily propelled forward by technologies developed in open source communities.
  2. Agility and resilience, in both business and infrastructure: The ability to consume ever-increasing volumes of information – either about your business, or your infrastructure – and rapidly make decisions based upon that data, and *act*, is what separates successful organizations from dysfunctional ones. Increasingly, people are not building culture, or infrastructure, with permanence in mind; the need to be agile also drives the need for resilience – the ability to bounce back from failure. More specific to infrastructure technologies, the ability to abstract, simplify, and automate enables the ability to scale in size and more rapidly develop New Stuff – which has manifested itself in an emerging sea of packaging, configuration, orchestration, and other glue-ish tools for infrastructure, many of which were born from the need to more efficiently deal with the operating system. Organizations strive to build the right thing, the Fedora Project included, and choice abounds when it comes to technologies to enable that building.

The Fedora.next initiative is paving the way for Fedora 21 and beyond; to the most casual of onlookers, the biggest change from previous releases is the shift to building purpose-specific versions of Fedora – namely, Workstation, Server, and Cloud-image products – rather than the “one Fedora to rule them all” release that we have produced in the past.  This is, essentially, putting us far closer to “building the right thing” than we’ve ever been; it helps us to make the technologies we develop more consumable for our users and contributors, and enables a tighter feedback loop on what we are producing in a world where the pace of technology is moving at warp speed. And Fedora’s success in shifting focus to a more diverse audience via a change in product set directly enables Red Hat and other companies to have more successful projects themselves.

And speaking with my red fedora on – Red Hat, of course, does hope to benefit from these new purpose-specific products and the emerging work around them. Just as a single, general-purpose Fedora has helped select technology for today’s RHEL, Red Hat hopes this diversity will do the same for future RHEL. The communities that are springing up around Linux and open source development have become very diverse, and so have Red Hat’s customers and product lineup. The more appeal we generate with Fedora for those communities and use cases, the more value Fedora adds to the cycle of participation and integration. Since Red Hat’s engineers end up working on many parts of that cycle through free and open source upstreams and integrating in Fedora, it’s no surprise they’re interested in helping Fedora get these new products well thought out via the working groups. Bettering Fedora’s appeal also directly impacts Red Hat’s ability to build its ecosystem and thereby bring even more participation to, and investment in, Fedora.

All that said – our own need in the Fedora community to build resilience and agility, in both our infrastructure and our culture/community, is key to successfully launching three products. The process isn’t going to be as easy as flipping a lightswitch (sorry, folks!), but rather more of an evolution. Many new things are already underway in terms of new technology – such as our work on coprs, collaboration with Docker, or the (IMO) exciting work going on in the Cloud SIG around atomic upgrades – as well as rethinking some of our existing processes around how we build and test our products. As we navigate through this process, our fearless program manager, Jaroslav, will be helping to coordinate and plan how all of these pieces fit together – and I encourage you to keep an eye on those planning details and dependencies so that we can deliver a Fedora that is prepared for the next 10 years of technological innovation.

I’m a fan of the concepts behind the new purpose-driven products, and I encourage you to bring constructive inputs to the mix. Of course, I’m also delighted for people to bring contributions around the products —  just as we’ve done for our past 20 releases. It’s an exciting time for Fedora, and a great time to be involved and to influence the next 20 releases to come. (Or more!)

Welcoming the CentOS Community to the Red Hat family.

Welcome, CentOS community folks, to the wider family of Red Hat sponsored community projects.

Just a short bit ago, Red Hat and the CentOS Project jointly announced the creation of a formal, collaborative relationship, which effectively (for lack of a better metaphor) “adopts” the CentOS project into the family of other Red Hat-sponsored communities such as the Gluster Community, OpenShift Origin, the JBoss Community, and of course, the Fedora Project.

From the perspective of Daddy Shadowman, this is Big News, of course; from a community perspective, frankly, it’s something that I think should have been done long ago.  I know that many people, myself included, have friends contributing in one way or another to CentOS, or contribute themselves, and have long considered CentOS to be part of our ecosystem; having the “blessing,” and support, of Red Hat, is something I see as a Good Thing. More about those Good Things shortly. In the meantime:

If you haven’t read the FAQ, I encourage you to do so. I know that lots of folks generally assume that an FAQ is not going to have a lot of information, but in this case it is actually quite replete (in fact, I have joked that when printed, it weighs approximately 6 pounds), and will likely answer any questions that people might have. For those interested, there is also a webcast with Brian Stevens, our lovely CTO, at 5pm Eastern; and of course you can head on over to the CentOS Project website to get more information. (Or to get acquainted, if you aren’t. But seriously; I know you are. Come on.)

Despite the plethora of available information, I expect that there may be folks within the Fedora Project community who will have questions above and beyond the answers provided in the FAQ. The Fedora Project just recently celebrated its anniversary of 10 years as a community; both Fedora and Red Hat have grown tremendously during those 10 years, and the Fedora Project’s evolution as a community, and what Red Hat has learned during that process, have paved the way for many of Red Hat’s other communities’ successes. But more pertinently: the Fedora Project is a community that deeply cares not just about itself, but also about other communities, and about the state of free and open source software in general. And thus, I know that questions may arise not only from our own experiences as a “Red Hat sponsored community project”, but also out of our deep knowledge of “how the sausage is made,” so to speak, and curiosities may be sparked about various technical implementation details. I’m happy to answer those questions where I can, either personally or on the Fedora Board list; other questions might be more appropriate for other groups, such as the Infrastructure team, or even the CentOS mailing lists themselves. I trust that most folks within the Fedora Project can figure out where to direct such questions.

That said – I’m happy to provide a bit more Fedora-related context, in the hopes that it might appease curiosities, and also because I would hate to see a perfectly good roll of tin foil go to waste on an unnecessary hat. 🙂 And so, a few points follow:

  1. The new relationship between Red Hat and the CentOS Project changes absolutely nothing about how the Fedora Project will work, nor does it affect the role that Fedora fulfills in Red Hat’s production of Red Hat Enterprise Linux. Fedora will continue to set the standard for developing and incorporating the newest technological innovations in the operating system; those innovations will continue to make their way downstream, into Red Hat Enterprise Linux, CentOS, and many other -EL derivatives.
  2. Those of you who are Fedora Package Maintainers are not now suddenly obligated to maintain anything in the CentOS Community. Additionally, this does not affect Fedora’s EPEL work; this will continue to be something that the Fedora Project provides, as long as it wishes to do so.
  3. The Fedora and CentOS communities are not going to be “forced” somehow to work together. Obviously, there exist a number of places where we have overlap in processes, build infrastructure software, and the like, and we certainly have the opportunity ahead of us to cooperate and share when it makes sense. The CentOS folks will be building a more transparent build system, and building out a release and infrastructure community – areas where we have expertise with what is incredibly similar tooling; similarly, they also have deep pockets of expertise in various types of automated build testing that haven’t yet become a critical part of Fedora’s culture. As I said previously – there are already numerous friendships forged between members of these two communities, and I would expect that over time, the things that make sense to collaborate on will become more obvious, and that teams from the two respective communities will gravitate towards one another when it makes sense.

In short: Nothing is really changing for those of us in the Fedora Project, at least in any way that we don’t choose to change ourselves. But 10 years of our own evolution as a project certainly doesn’t mean that we’re done growing, learning, and changing over the next 10 years, and beyond. As the CentOS Project continues to nurture and grow its own community, I expect that many of those community members will naturally become more interested in understanding how to influence the future of RHEL – the thing that eventually becomes CentOS – which is, of course, the space where we in the Fedora Project shine. While this was possible before, the “blessing” by Red Hat allows the CentOS project latitude that didn’t really exist before as far as “reaching out.” The great opportunity for Fedora now is not only to help those community members make the trip over the bridge from the downstream community to our upstream community, but also to tap into the wealth of end-user expertise and hands-on experience held by the collective community of CentOS users – and seriously, THERE ARE A LOT OF THEM – and to really listen, to create a feedback loop from those ultimate end users back to the developers who are creating what will become the next generation of Red Hat Enterprise Linux. And make it even better.

(Those are those Good Things to which I previously referred, BTW.)

I hope that everyone in the Fedora Project can join me in welcoming CentOS to the Big Happy Family.  I talked to Karanbir Singh, my counterpart in the CentOS project, on the phone yesterday, and expressed this, but it’s something I mean from the bottom of my heart, and isn’t just for him, or my other new coworkers (Jim Perrin, Johnny Hughes, Fabian Arrotin – welcome, guys!) — but really, for all of the extended CentOS Community: I really hope that this goes smoothly for you guys. And if you have questions, about anything – I’m here, and I’m sure many others in the Fedora Project will be here too. We’ve been down many of the paths that you guys will see in the future – and hope that you guys can benefit from our past experiences. So don’t hesitate to ask. Really.

Congratulations to all of you.