Open Source community building: a guide to getting it right

[Everyone – from carriers to OEMs – is busy building developer communities. But many have failed and more have seen disappointing results. Guest author Dave Neary looks at what lessons history can teach us on community building and the key DO’s and DON’Ts.]

Community development in open source software is not just for geeks in sandals or niche Linux companies any more. It’s mainstream and it’s here to stay.

A recent analysis of companies contributing code to the Linux kernel shows that large companies including Novell, IBM, Intel, Nokia and Texas Instruments are getting serious about engaging in community development. Organisations such as the LiMo Foundation are encouraging their members to work with community projects “upstream” – that is, with the community rather than in isolation – to avoid missing out on millions of dollars’ worth of “unleveraged potential” (PDF link).

A diverse developer community is critically important to the long term viability of free and open source projects. And yet companies often have difficulty growing communities around their projects, or have trouble influencing the direction of the maintainers of community projects like the Linux kernel or GNOME. Sun Microsystems and AOL are prominent examples of companies which went full speed into community development, but were challenged (to say the least) in cultivating a mutually beneficial relationship with community developers. There are many more examples – but often we never even hear about companies who tentatively engage in community development, and retreat with their tail between their legs, writing off substantial investments in community development. Xara, for example, released part of their flagship software Xara Xtreme for Linux as open source in 2005, before silently dropping all investment in the community project in late 2006.

What can go wrong? What are the most common, and the most deadly, errors that companies make in their community engagement strategies? And how can you avoid them? Avoiding these errors does not guarantee success, but failing to avoid them may be sufficient to guarantee failure.

Where to begin?
The easiest and gravest error that companies make is to sprint headlong into free/open source development with unrealistic expectations.

The history of free & open source software development is filled with stories of companies who are disappointed with their first experiences in community development. The technical director who does not understand why community projects do not accept features his team has spent months developing, or the management team that expects substantial contributions from outside the company to arrive overnight when they release software they’ve developed. Chris Grams once described the Tom Sawyer model of community engagement – companies who expect other people to do their job for them. Make sure you don’t fall into that trap.

Doing community software development well takes time, even when you get everything right. And there are a lot of things you can get wrong.

So where to begin? Before you start community development, you should have thought about what you want to get out of it. Is Open Source a way to grow the brand and broaden distribution of your product, with the goal of generating leads? Do you need to grow an ecosystem of developers building on top of your platform? Do you want to incorporate an existing project into your product to reduce costs, but customise it to fit your needs? Each of these goals, and any of the other reasons people develop software in the open, requires specific strategies and tools tailored to the situation. Indeed, how you measure success will change depending on your goals.

The two most common situations companies find themselves in are collaborating with an existing open source community, and growing a community around a piece of software they are releasing.

Joining a community
When joining an existing community, building trust and reputation takes time. The first step to working productively with a community is to understand the structure of that community. Who are its leaders, what are its priorities? If the culture of a project does not align with your business objectives, that may affect your decision to engage with it in the first place.

If you find that you can work with the project, and that its general goals are aligned (or at least, not misaligned) with yours, then the hard work can start. For example, Hewlett-Packard backed Linux early, at the expense of promoting its own proprietary Unix, HP-UX. Ten years on, HP now ships close to 40% of all Linux servers. In contrast, Sun Microsystems decided to create an independent community around Solaris in 2005, releasing OpenSolaris under an OSI-approved licence which is incompatible with the GPL (the licence of the Linux kernel). The Sun-sponsored project failed to create a substantial independent developer community from its launch until the acquisition of Sun by Oracle and the subsequent closing of the OpenSolaris project in 2010.

Once you make the decision to collaborate, and you have chosen the project you want to work with, the first and most important decision is who will work on the project. This consideration often does not get the attention it requires from top management. The engineers who will be working on the project on your behalf will be representing your company. It will be their job to build trust with project maintainers, navigate the project’s roadmap process to ensure that their work is accepted upstream, and ensure your business objectives are met.

The choice of the people who will work with the community is particularly important; as Stormy Peters, former Executive Director of the GNOME Foundation, once wrote, companies are not people. In other words, companies can never be members of a software development community, although their employees can be. Companies can be valuable institutional partners for projects, but to quote the Beatles and Karl Fogel, money can’t buy you love (or community support).

So now you have some engineers working with the community. What next? Havoc Pennington wrote some excellent advice in 1999 for engineers working with community projects. The one-line summary might be: “when in Rome, do as the Romans do”.

Often communities will have documented their norms – many projects, including the Linux kernel and modules in the GNOME project, have “HACKING” files under source control documenting expectations for contributions, and mailing list policies. For most communities, these can be summarised as “go with the flow, don’t rock the boat”. Miguel de Icaza, founder of the GNOME project and vice president of developer platforms at Novell, has written an article explaining the reasoning behind these policies.

One temptation which you should avoid at all costs is to leverage the trust which one contributor has gained to channel contributions from others into the project. This will only promote Shy Developer Syndrome in your team.

By all means, have your senior community guy mentor others in the team and help them through the process, but avoid making that mentor a gatekeeper, shielding the rest of your team from the community. Attempting this will always backfire when your gatekeeper moves on or when the community finds out that he’s committing the work of others and circumventing community norms.

Growing a community
Looking at the second scenario: growing a community. If you do decide to release software under an OSI-approved free software licence, your first choice will be whether to set the project up as a community project or not, and to what degree.

Simon Phipps has written about the different types of communities which can grow around a free software project. He describes communities of core co-developers, non-core developers who work on add-ons, integrators who distribute and configure the software but don’t necessarily modify it, and finally users of the software. Each of these communities has different needs, and requires a different approach.

If you want to grow a community around your project, there are a few best practices you should follow:

Control: If you opt for rules ensuring that you decide what code will be added to your product’s core, you will lose many of the benefits of community projects. Some examples of rules which come from a desire to maintain control are a requirement to assign copyright for all contributions to the core product to you, or ensuring that only employees can commit directly to the main branch of your core product. There are many good reasons to maintain ownership of the core, but this decision will severely handicap community contributions. This does not prevent you from developing other types of community, however, such as a community of add-on developers or integrators.

Barriers to entry: The barriers that contributors have to overcome come in different shapes: unusual tools, convoluted processes for bug reporting, feature requests or patch acceptance, and legal forms you may ask people to sign before contributing.

Tools and infrastructure: Ensure that you provide your users with the opportunity to distribute their work and connect with other users – whether through a forge for modules, or through the use of Gitorious or Bazaar for source control. Contributing to your project should be a social experience.

Community processes: Create a just environment – no-one likes to be considered a second class citizen. Document processes for gaining access to key resources like bug moderator permissions, commit access to the master branch, or editor access for the project website.

Budget appropriately: Commit the appropriate resources – building a community takes time and effort, and that means investment, primarily of human resources. Having one guy who acts as community manager, dealing with the community, while a team of 10 developers stays behind the corporate walls isn’t going to cut it. As Josh Berkus of PostgreSQL said in his “How to Kill your community” presentation, if your nascent community feels neglected, it will just go away.

Launching a new project is like launching a new product – except that acquiring a new community developer takes much longer, and is much more difficult and costly, than acquiring a new user. In the same way that companies track subscriber acquisition cost (SAC) for new product launches, tracking the Developer Acquisition Cost (DAC) for your project is a key metric in evaluating whether you are doing the right things to grow your community.
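As a purely illustrative aside (not from the original article): if DAC is defined by analogy with SAC – the total community-building spend in a period divided by the number of new active contributors gained in that period – it can be tracked with a few lines of code. The Python sketch below uses hypothetical names and made-up figures simply to show the calculation.

```python
# Illustrative sketch only: tracking Developer Acquisition Cost (DAC).
# Assumption: DAC = community-building spend in a period divided by the
# number of new active contributors gained in that period (by analogy with SAC).
# All names and figures below are hypothetical, not real project data.
from dataclasses import dataclass


@dataclass
class Quarter:
    label: str
    community_spend: float   # staff time, events, infrastructure, etc.
    new_contributors: int    # people who landed their first accepted patch

    @property
    def dac(self) -> float:
        """Spend per newly acquired contributor for this quarter."""
        if self.new_contributors == 0:
            return float("inf")   # money spent, nobody acquired
        return self.community_spend / self.new_contributors


quarters = [
    Quarter("Q1", community_spend=60_000, new_contributors=4),
    Quarter("Q2", community_spend=55_000, new_contributors=11),
]

for q in quarters:
    print(f"{q.label}: DAC = {q.dac:,.0f} per new contributor")
# A falling DAC from quarter to quarter suggests barriers to entry are dropping.
```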

Developers have lots of projects to choose from, and they tend to gravitate towards projects where co-development is the norm. So you have to be thinking about the contributor experience, and the value proposition to external contributors, all the time.

A clear and compelling vision, with lots of opportunities to contribute, and low barriers to collaboration, can help reduce the acquisition cost of community contributors, and similarly reduce the cost of acquiring new users and paying customers.

Avoid common anti-patterns
If best practices are behaviours that should be adopted, community anti-patterns are best practices gone wrong. If the reasons behind a “best practice” are misunderstood, you can end up imitating behaviour without getting the desired result, much like the Pacific cargo cults, which built airstrips in the hope that planes would land. Like seasoning, adding too much can ruin the dish.

In general, when you see the following patterns emerging, you should work to counteract them, both in the communities you participate in and in your own behaviour as a corporate citizen within those projects. Each of these patterns is common and tempting, because each represents a best practice applied in inappropriate circumstances. And each of them results in a net reduction of community health.

Some common anti-patterns you should avoid are:

1. Command & Control – communities are partnerships. Companies are used to controlling the products they work on. Attempting to transfer this control to a project when you want to grow a developer community will result in a lukewarm response from people who don’t want to be second class citizens. Similarly, engaging with a community project where you will have no control over decisions is challenging. Exchange control for influence.

2. Water cooler – when your team does too much of its work in private, your community will not understand your motives and priorities. By working on mailing lists or other publicly readable and archived forums, you allow people outside your company to get up to speed on how you work.

3. Bikeshed – A “bikeshed” discussion is a very long discussion to make a relatively minor decision. When you feel like the community is dragging you down, know when to move from talking to doing.

4. Black hole – It can be tempting to hire developers who have already gained reputation and skills in the projects you build on. But beware when hiring developers out of the community: if their new role leaves them no time for upstream work, their contributions disappear into a black hole and the community ends up worse off. Ensure that working in the community is part of the job description.

5. Cookie licker – Picture a child who has had enough cookies, but wants to save the last one for later. So they take it off the plate and lick it, to ensure no-one else will eat it. The same phenomenon exists for community projects – prominent community members reserve key features on the roadmap for themselves, potentially depriving others of good opportunities to contribute. Beware of over-committing, and leave space for community contributions in project roadmaps. Be clear on what you will and will not do.

Happy Community Gardening
Community software development can be a powerful accelerator of adoption and development for your products, and can be a hugely rewarding experience. Working with existing community projects can save you time and money, allowing you to get to market faster, with a better product, than is otherwise possible. The old dilemma of “build or buy” has definitively changed to “build, buy or share”.

Whether you’re developing for Android, MeeGo, Linaro or Qt, understanding community development is important. By embracing open development practices, investing resources wisely, and growing your reputation over time, you can cultivate healthy give-and-take relationships where everyone ends up a winner. The key to success is treating communities as partners in your product development.

By avoiding the common pitfalls, and making the appropriate investment of time and effort, you will reap the rewards. Like the gardener tending his plants, with the right raw materials, tools and resources, a thousand flowers will bloom.

– Dave

[Dave Neary is the docmaster at maemo.org and a long-standing member of the GNOME Foundation. He has worked in the IT industry for more than 10 years, leading software projects and organising open source communities. He’s passionate about technology and free software in particular.]

Mobile Virtualization – Coming to a Smartphone Near You

[Mobile virtualisation is an underhyped yet far-reaching technology. Guest author Steve Subar looks at virtualisation and how the technology will be elemental in enabling mass-market smartphones.]

Imagine one phone with two personalities – one to fit your personal life, the other for business.  Instead of carrying around two or more devices, you’d be able to access multiple virtual phones on a single handset.

This article introduces mobile virtualization and the range of its use cases, with implications that span from silicon to smartphones to shrink-wrapped software to operator services.  It also expands upon two key applications: building mass-market smartphones, and enabling secure mobile services.

What is Mobile Virtualization?
Virtualization is new to mobile, but established in the data center, fundamental in cloud computing and increasingly popular on the desktop.

Mobile Virtualization lets handset OEMs, operators/carriers and end-users get more out of mobile hardware.  It decouples mobile OSes and applications from the hardware they run on, enabling secure applications and services on less expensive devices today and deployment on advanced hardware tomorrow.

Virtualization provides a secure, isolated environment for operating systems that is indistinguishable from “bare” hardware. This environment is called a virtual machine (VM), and acts as a container for guest software. A software layer called a hypervisor provides the virtual machine environment and manages virtual machine resources.

Resources and performance of mobile devices differ markedly from data center blades and desktops. So do business requirements. Mobile virtualization is different from virtualization used in enterprise and personal computing in several ways:
– Hardware Support: mobile virtualization focuses on silicon deployed in mobile handsets, primarily ARM architecture CPUs. By contrast, most enterprise and desktop-hosted virtualization targets versions of the Intel Architecture. Moreover, Intel and AMD augment server and desktop CPUs with virtualization support functions, whereas the silicon in phones does not (yet) include these capabilities.
– Guest Software: data center and cloud virtualization usually hosts multiple instances of a single guest OS: thousands of Windows or Linux VMs. Desktop-hosted virtualization usually invokes just one. Mobile virtualization involves running multiple, diverse guest platforms: application OSes (Android, Linux or Symbian), low-level RTOSes for baseband processing and other system chores, and also lightweight environments for specialized processing (shared device drivers, security code, etc.).
– Performance: enterprise virtualization strives for maximum throughput for guest software loads. Mobile virtualization must also enable real-time response for latency-sensitive baseband and multimedia processing on resource-constrained mobile silicon.
– Suppliers: enterprise virtualization is dominated by offerings from VMware, Microsoft, IBM and Citrix, and supported by open source projects like Xen and KVM. VMware and Parallels supply the desktop-hosted market. While several vendors field embedded virtualization technology (Wind River, Green Hills), only a few focus on mobile virtualization – VirtualLogix, Trango (now part of VMware) and Open Kernel Labs.

Use Cases
Mobile virtualization is a flexible technology with a range of use cases:
– BYOD: lets you Bring Your Own Device to work and switch among multiple virtualized environments, isolating personal and corporate applications and data.
– Chipset Consolidation: merging multiple CPUs into a single processor running both the application and baseband stacks, to reduce BOM costs and simplify design. Lower BOM costs could enable a new wave of mass-market smartphones, shipping in greater numbers and driving growth in data traffic and ARPUs.
– Legacy Software Support: running unmodified, previous-generation software (e.g., a pre-certified baseband stack) in its own virtual machine on a new handset design.
– Security: using multiple VMs to isolate software stacks from one another, e.g., securing mobile payments or protecting programs used to access business-critical enterprise assets from untrusted open OSes and software.
– Multicore Support: managing available processor cores and mapping physical CPU resources onto “virtual CPUs” running actual software loads.
– Energy Management: shutting down CPU cores when they are not needed and migrating running guests to the remaining core(s).
– MNO-Branded Services: using secured VMs to host operator-branded services.
– Mobile-to-Enterprise Virtualization (M2E): using secured VMs to host enterprise applications and provide access to business-critical corporate assets, e.g., hosting the Citrix Connector to access a virtual enterprise desktop.
– Rapid Deployment: letting OEMs and operators/carriers launch new versions of existing devices and roll out new service offerings on existing mobile hardware.

Most mobile OEMs and operators/carriers look to mobile virtualization to address a combination of use cases. Let’s examine two of particular interest: mass-market smartphones and secure services.

Mass-Market Smartphones
Smartphones increasingly drive the global mobile ecosystem. According to Gartner, total mobile phone shipments in 2009 surpassed 1.2 billion, of which 172.4 million units were smartphones, an uptick of 23.8% over 2008.

Smartphones are critical to the fortunes of mobile OEMs, MNOs, chipset suppliers, and providers of applications and services – they drive data traffic, improve hardware margins, expand silicon design-wins, and drive software sales through app stores, increasing post-load revenues. However, broader adoption of smartphones has been slowed by the retail pricing of smart handsets and the cost of accompanying data plans.

A mass-market smartphone offers smartphone capabilities at a feature-phone price point. To deliver such a high-functioning yet low-cost device, OEMs must deploy a full-featured open OS and applications on more modest mobile hardware.

Current smartphones use high-end chipsets with dedicated CPUs for application and baseband processing. This approach contrasts with featurephones, where both stacks run on a single CPU under a simpler embedded OS (a real-time operating system, or RTOS).

Virtualization enables OEMs to build smartphones with less expensive single-core chipsets (see figure). Such chipsets also allow the use of lower-cost components for other functions (display, battery, etc.) that are not compatible with high-end mobile silicon.

The mass-market smartphone is more than just a concept touted by visionaries. Real devices have been delivered, such as the Motorola Evoke QA4, with more to come.

Secure Services
Mobile virtualization also facilitates a range of secure services, enabling enterprise-grade security on standard handsets. Virtualization can help secure mobile platforms, applications and services by keeping trusted software to a bare minimum – the hypervisor itself and carefully chosen additional components – and then isolating it from threats arising from the vulnerabilities and faults in today’s complex software stacks. Virtual machines containing a bare minimum of essential software can be dedicated to secure services. A single phone could contain a virtual machine optimized for the execution of secure services, deployed side by side with other mobile software, at practically no incremental BOM cost.

Secure service examples include:
– Isolating software for mobile payments and banking
– Hosting secure access to private medical records
– Providing a platform for secure access to business-critical corporate data (as in BYOD and M2E above)
– Enabling secure voice calling by isolating VoIP stacks from open OSes

Building mass-market smartphones and deploying secure services with virtualization are complementary use cases and emphasize doing more with less:  virtualization enables deployment of smartphone capabilities on lower-cost hardware; it also makes possible the introduction of new secure services on currently-available mobile devices.

Overcoming Challenges to Adoption
As illustrated above, mobile virtualization offers a flexible solution to many design and deployment issues for mobile devices and the services that run on them. Despite its many use cases and successful deployment in products shipping in volume, mobile virtualization faces systemic challenges to even broader use:
– Perception of the technology as a viable alternative to legacy solutions, e.g., a software solution for delivering lower BOM costs or providing security
– Concerns about performance overhead
– The need to integrate the mobile hypervisor as pre-load software, on a per-device basis (as opposed to post-load, application-style deployment)

These challenges are gradually being overcome; mobile OEMs and operators/carriers are increasingly attracted to using virtualization to bring down the cost of Android devices, while recent benchmarks at key OEMs have tempered concerns about performance overhead.

Mobile virtualization has been shipping in mobile phones since 2009. Despite challenges to adoption, the mobile/wireless ecosystem is turning its attention to this flexible technology, especially to bring down the cost of building and buying smartphones.  Coupled with emerging needs to provide secure services on mobile devices, mobile virtualization should play a key role in the deployment of the next 500 million phones.

– Steve

[Steve Subar is the President and CEO of Open Kernel Labs, a mobile virtualization firm]