State of the Developer Nation 23rd edition: the fall of web frameworks, coding languages, blockchain, and more!

It’s the most wonderful time of the year! Yes, the beginning of the “Merry” season but also the time when new insights from the world of developers come to everyone’s house (magic may or may not be involved)!

Stay up to date with the 23rd edition of the State of the Developer Nation report and get the insights you would only pick up by slashing through data with your own two hands.

Our 23rd Developer Nation global survey reached more than 26,000 developers in 160+ countries and its findings are bundled in a free “State of the Developer Nation” report. 

This research report delves into key developer trends for Q3 2022:

  1. The state of blockchain development
  2. Students’ top career aspirations
  3. Language communities – An update
  4. Why developers contribute to vendor-owned open-source projects
  5. Types of studios game developers work for
  6. The rise and fall of web frameworks

In addition to outlining the report’s major findings, here are a few key takeaway points to spark your curiosity:

The state of blockchain development

  • 25% of developers are currently working on or learning about blockchain applications other than cryptocurrencies. 
  • Developers with 6-10 years of experience in software development are the most likely to be working on blockchain projects.
  • Though Ethereum is the dominant blockchain platform, it is the only one that is more popular among learners than among those currently working on blockchain applications.

Language communities – An update

  • JavaScript remains the largest programming language community, with an estimated 19.6M developers worldwide using it.
  • In the last two years, Java has almost doubled the size of its community, from 8.3M to 16.5M. For perspective, the global developer population grew about half as fast over the same period.
  • Kotlin and Rust are the two fastest-growing language communities, having more than doubled in size in the past two years. 

The rise and fall of web frameworks

  • Web developers who use frameworks are more likely to be high-performers in software delivery than those who don’t.
  • Web developers are gradually settling on a smaller number of frameworks as they stop experimenting with a wide range of tools.
  • React is currently the most widely used client-side framework and its adoption has remained stable over the past two years. By comparison, jQuery’s popularity is decreasing rapidly.  

As you’ll notice, most of the trends we discuss in this report are takeaways from how developers use technology. Our goal is to share these insights with the world to help guide the next generation of development. 

You can download the full report for free and access all data and insights within.

If you need additional information or are looking to understand developer preferences in more depth, please get in touch with us and we will dive into it together.

Who is using low-code / no-code tools?

This is a chapter from our latest State of the Developer Nation 22nd Edition, which is free to download. You can watch our Lightning Session on the key findings and also read below for the whole report and insights on low-code / no-code tools.

Low-code/no-code (LCNC) tools provide a visual approach to software development, abstracting and automating parts of the application development process. This allows those without prior software development experience to create custom applications and offers potential time and cost savings for professional developers. In this chapter, we investigate the extent to which developers are using LCNC tools, showing differences according to professional status, geographical regions, and experience levels.

When it comes to reducing development overheads, addressing the challenge of finding skilled developers, and accelerating time to market, LCNC tools are becoming increasingly attractive. The sophistication of these tools is increasing rapidly, giving them the potential to significantly disrupt the software industry. This raises the question: to what extent are developers using LCNC tools for their development projects?

We begin by separating developers according to their professional status – differentiating professionals from non-professionals, who are hobbyists and/or students. We excluded from our sample those who indicated that they were unsure about what share of their development work was done using LCNC tools. Just over half (54-55%) of developers in each group report that they are not using LCNC tools at all for their development work. This proportion is marginally lower for non-professionals who are students (55% of those who are exclusively students and 53% of those who are both students and hobbyists) than for non-professionals who identify as exclusively hobbyists (57%).

46% of professional developers use low-code/no-code tools for some portion of their development work

State of the Developer Nation 22nd Edition

The proportion of developers who do use LCNC tools barely differs across groups (46% of professionals vs 45% of non-professionals). This highlights that LCNC tools are finding traction among those less likely to be familiar with coding, and that use cases within professional software development are also common.

As experience increases, developers are less likely to use LCNC tools at all. This is particularly true among those with more than ten years of experience. These tools are often framed as being best suited for simple programming tasks. Hence, the complexity of development work assigned to more experienced developers may be less appropriate for LCNC approaches. Furthermore, experienced developers are likely to have mastery over simpler coding tasks, which leaves little room for the efficiency gains that LCNC tools are often heralded for.

Using LCNC tools without a degree of accompanying manual coding is highly uncommon across all experience levels. The proportion of developers who use LCNC tools for a small amount (up to a quarter) of their development work remains relatively constant (between 17% and 24%) across the experience spectrum. Therefore, LCNC’s most likely role is as an occasional adjunct to existing coding tools, regardless of developers’ experience.

Experienced developers, particularly those with more than 10 years of experience, are the least likely to use LCNC tools

State of the Developer Nation 22nd Edition

More extensive use of LCNC tools, i.e. for between one-quarter and three-quarters of all development activity, peaks slightly for those with around three to ten years of experience, revealing that it is early to mid-experience developers, rather than newcomers, who are most likely to elevate LCNC tools’ status to essential. This is perhaps due to the recognised career importance of gaining traditional development experience before reducing reliance on writing code. Only 2-4% of developers across all experience levels use LCNC tools for 75% or more of their development tasks, indicating that it is highly uncommon to shift the balance heavily towards LCNC-driven development.

Our data reveal notable differences in adoption and engagement with LCNC tools across different geographic regions. The Greater China area emerges as the region in which developers are most likely to be using LCNC approaches. 69% of developers in this region report using LCNC tools, compared to the global average of 46%. This suggests that the Chinese LCNC tool market has transitioned from an introduction phase to a growth phase. According to Mendix’s State of Low-Code report, IT professionals in China are the most likely to suggest that low-code is a trend their organisation can’t afford to miss (84% compared to 72% globally). Non-developer, or citizen developer, audiences also likely account for a large part of LCNC’s growth. However, as in all regions, the majority of bona fide software developers in the Greater China area currently use LCNC tools for less than half of their overall development work. It remains to be seen whether their reliance on such tools will also expand as the market and tools mature.

19% of developers in North America use Low-Code/No-Code tools for more than half of their coding work – almost twice the global average of 10%

North America has the second-highest LCNC tool adoption rate and stands out for the proportion of developers using LCNC tools to conduct more than half of their overall development work – 19% of developers here report that their use of LCNC tools outweighs their manual coding (comprising 13% using them for half to three-quarters of development work and 6% using them for more than three-quarters); almost double the global average of 10%. Hence, North America appears to be at the forefront of the LCNC movement, providing the strongest evidence that these tools can supplant traditional development approaches – even in a region where 81% of developers identify as professionals.

South Asia, the Middle East and Africa, and East Asia excluding Greater China are all above the global average in terms of LCNC tool adoption. Despite considerable uptake in these regions, LCNC products have not matured to the point where their use is a dominating feature of developers’ processes. Regions such as Western Europe and Israel, Oceania, Eastern Europe, and South America are all below the global average in terms of LCNC tool adoption.

The shortfall in these regions is particularly linked to smaller than average proportions using LCNC tools for more than 25% of their development work. The proportion using them for less than a quarter of their work is more comparable to the global average, suggesting that the market is still in its introductory phase in these regions – developers are evaluating the tools but are yet to rely on them for a substantial portion of their work.

Access the full free report to dive into insights on:

  • Language Communities
  • Understanding Developer Personalities
  • Who is using low-code / no-code tools
  • Spotlight on China and the Rest of East Asia
  • How developers generate revenue
  • Emerging technologies

If you have questions about the data above, want more, or want to explore other topic areas we cover, talk to us.

Google has the leading developer program, but Amazon is catching up

Developers. Decision-makers. Kingmakers?
For several years now, at SlashData we have been helping our clients – some of the biggest names in tech – to understand how their developer programs measure against the competition. Twice a year, we run an extensive and wide-ranging global survey to understand who developers are, what tools and resources they use, and where they are going. Developers share with us their experiences with vendors’ resources – which ones they use, how often they use them, and how happy they are with the experience. We also dig a little deeper into what developers value in vendor support, resources, and communities.

Our research shows that developers are becoming increasingly involved in all stages of the decision-making process. Not only are they writing specifications for vendors and tooling choices, but they are also influencing decision-makers and budget holders. If software is eating the world, then developers are writing the menu. 

To attract developers, many tech companies are actively investing in Developer Relations (DevRel) teams and developer marketing activities. They are creating an abundance of resources, training programs, technical support, events, and community activities. It’s not always clear which activities should be priorities and how resources should be allocated to achieve long-term strategic goals. We are here to help.

Our Developer Program Benchmarking research tracks 20+ of the leading developer programs, and captures developer sentiment across more than twenty developer program attributes, ranging from documentation and sample code to mentoring programs and access to experts. In so doing, it helps DevRel and developer marketing practitioners understand how their developer program compares against the rest.

Here, we give you a snapshot of the state of play for these developer programs. We use three KPIs to create a 360° overview of how each developer program performs:

  1. Adoption – How many developers use a vendor’s resources
  2. Engagement – How frequently developers engage with the resources
  3. Satisfaction – How developers rate their experience using the resources
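To make these three KPIs concrete, here is a minimal sketch of how they could be computed from raw survey responses. The field names, scales, and records below are hypothetical illustrations, not the actual benchmarking data or methodology.

```python
# Minimal sketch of the three KPIs, computed from hypothetical survey records.
# Each record captures one respondent's relationship with one vendor's resources:
# whether they use them, how often (0-4 scale), and a satisfaction rating (1-5).
from collections import defaultdict

responses = [
    {"vendor": "VendorA", "uses": True,  "frequency": 4, "satisfaction": 5},
    {"vendor": "VendorA", "uses": True,  "frequency": 2, "satisfaction": 4},
    {"vendor": "VendorB", "uses": True,  "frequency": 1, "satisfaction": 3},
    {"vendor": "VendorB", "uses": False, "frequency": 0, "satisfaction": None},
]

total_respondents = len(responses)
by_vendor = defaultdict(list)
for r in responses:
    if r["uses"]:
        by_vendor[r["vendor"]].append(r)

for vendor, users in by_vendor.items():
    adoption = len(users) / total_respondents                          # share of developers using the resources
    engagement = sum(u["frequency"] for u in users) / len(users)       # mean usage frequency among users
    satisfaction = sum(u["satisfaction"] for u in users) / len(users)  # mean rating among users
    print(f"{vendor}: adoption={adoption:.0%}, engagement={engagement:.1f}, satisfaction={satisfaction:.1f}")
```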

Bubble chart showing how developers perceive the leading developer programs

We can see that the Market Leaders – Google, Microsoft, and Amazon – highly engage and satisfy developers. Their market share – or adoption rate, shown by the size of the bubble – reinforces their market-leading position. In fact, when we take a longer-term view of this data, it becomes clear that Google and Microsoft have long been the market leaders, staying at or near the top of the table for all three KPIs.

Recently, however, Amazon has made considerable progress. In fact, Amazon’s developer program has been growing faster than the global developer population, which currently stands at 24.3M (you can explore more in our developer population calculator), while Google’s and Microsoft’s shares have dropped slightly. When you take into account the large increase in Amazon’s satisfaction score and their aggressive growth strategy, the top table positions don’t seem so assured.

Our data also uncovers the Satisfying Specialists – these developer programs are often small and focused. Unity, Red Hat and DigitalOcean sit firmly in this space. Developers don’t need to engage frequently with these vendors’ resources, but when they do, they have an excellent experience. For these vendors, low engagement is not a cause for concern, though it does come with its own challenges – when developers have fewer touchpoints there are fewer opportunities to speak to them or to influence their behaviour. For these (and other) vendors with low engagement, messaging becomes vital. 

The Under-realised Value segment contains developer programs that, although having high engagement amongst developers, are being held back by their low satisfaction ratings. These programs are often (though not always) small, and the vendors here have a clear imperative to improve their developers’ experience. Thankfully, with developers engaging frequently with the resources there are ample opportunities to effect positive change.

But what, exactly, to change? 

This brings us to the true power of our Developer Program Benchmarking research. Not only do we understand how developers engage with vendors’ resources, but we also know which resources are important to developers, and how satisfied they are with the resources that companies provide. 

Though developers’ preferences change and evolve, some things stay constant. Of the twenty-plus resources that we ask about, documentation & sample code, tutorials & how-to videos, and development tools, integrations & libraries have consistently been rated as the most important resources that companies should offer. This shows that developers are focused not only on getting things done, using documentation and development tools to speed up the development process, but they also highly value having the opportunity to learn. We can see this repeated further down the list – training courses & hands-on labs provide the learning opportunities, whilst technical support allows them to lean on experts when they need to.

Table showing the 5 resources: documentation, tutorials, development tools, training courses and technical support

In this way, we can tell which resources developers value, and how their experience matches their expectations. This information, when combined with our wealth of survey data on demographics, firmographics, technology choices, motivations, skills, and much more, becomes incredibly powerful for informing strategic planning. We help some of the leading tech companies in the world to understand precisely which resources need improvement, and which developers will benefit most from such improvements. Have you ever wanted to know how to tailor your tutorials to the right level of complexity? Have you ever tried to decide how to localise your content? What about marketing to enterprise developers, what do they care about? 

We also go a level deeper. For many developer programs, we specifically ask developers how they use resources relating to different products or disciplines. For example, we help developer programs to understand whether or not they are vulnerable in the cloud compute market, or what are the specific preferences of developers using IoT resources. Once again, coupled with the rest of our rich and diverse data, this information allows you to create a finely tuned strategy that allocates resources efficiently and effectively.

With developers having such power in the decision-making process, this is a win-win for everyone involved. By understanding what developers value, you can tailor your offering to suit their needs, increasing retention, growing your audience, and ultimately, adding to your bottom line. SlashData are the analysts of the developer nation, and we can help you understand developers.

You can download a preview of the latest Developer Programs Benchmarking here.

Developer Research 101: The right methodology for reliable survey data

Suddenly a fine day dawns when your organisation’s key stakeholders agree that you need data to understand your developer audience. Well, ok, most likely that didn’t exactly happen overnight – in fact we know* that nearly 20% of DevRel practitioners struggle to justify the budget of their developer programs and 32% rely on qualitative arguments. But let’s skip that part for now, and fast-forward to that happy moment when there is full buy-in for data-backed developer strategy decisions. Right. You need data. But what data?

First, ask the right questions

Let’s pause here for a second. At SlashData we may have data in our DNA, but we know that plunging head-first into data is not where your quest for answers should begin. Your first step should be, instead, to ensure that you are asking the right questions. However trivial that may sound, you may discover (as many others have) that in fact it is not. If you get the questions wrong, the answers will be meaningless and the time and budget you will have invested in finding data to answer them will be wasted. 

Knowing exactly which questions you need answered will help you specify not only what data you need, but also where you should get it from, and how big a sample you should aim for. If it is just a total market share figure you’re after, for example, chances are you don’t need that many data points – neither in terms of sample size, nor in terms of breadth of information collected. If however you are trying to understand what developer personas (or segments) exist out there, where they are located, how they feel about different technologies, and where they’re going next, you’re looking into an undertaking of an entirely different magnitude, and may the Force be with you. Or, more practically, SlashData, as we have been in this business for more than a decade now.

Mind the source

Once you know which questions you need to answer, you should carefully choose your data collection method and sources. If, for example, you want to know how your technology is currently being used, you could use your own telemetry or usage data, or survey your own community. If, however, you need to see what other technologies your developers use, or how competing technologies are being used by your broader audience, you need to look beyond your own community by means of a global survey that targets everyone in your market. Based on our bi-annual survey of developer-facing organisations, we find that about half use their own survey data, and nearly half run qualitative research, when in fact it may be more appropriate to use a different approach. 

Graph showing how developer program and developer relations practitioners find information

A common, though less-than-ideal, approach, for example, is tracking developer sentiment (usually in the form of Net Promoter Score) based on data collected from current users that interact with you (say, through your website). While this may be a good indication of how your current active users feel, it cannot be generalised to represent the sentiment of your whole target audience, as it omits the views of past users who have now left you, and also the views of those who evaluated but rejected your technology in favour of a competitor. Those who abandon a technology are more likely to give it a low recommendation score if asked. By omitting their views and using only current users’ scores, you therefore get a positively biased result. This is particularly true in highly competitive, low-rigidity markets (such as some cloud services can be), where your current users are more likely to be satisfied fans who stay with you by choice rather than due to technology lock-in effects, while the displeased have already left you to turn to one of your competitors.

That is why in our surveys we always ask both current and past users how they feel about each of the technologies they either use, or have (recently) stopped using, or evaluated but rejected. In this way, we get an unbiased estimate of developer sentiment for the broad range of technologies that we track – allowing us to benchmark them with a high degree of confidence.
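As a toy illustration of this survivorship bias (with entirely made-up scores), compare an NPS computed only from current users with one that also counts churned users and rejecting evaluators:

```python
# Toy illustration of survivorship bias in NPS (all numbers are invented).
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores) / len(scores)
    detractors = sum(s <= 6 for s in scores) / len(scores)
    return round(100 * (promoters - detractors))

current_users  = [9, 10, 8, 9, 7, 10, 9, 6]   # mostly happy: they chose to stay
churned_users  = [3, 5, 2, 6, 4]              # left for a competitor
rejected_evals = [4, 6, 5]                    # evaluated but never adopted

print("Current users only:", nps(current_users))                                      # looks great
print("Whole target audience:", nps(current_users + churned_users + rejected_evals))  # far less rosy
```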

It’s not only the size that matters

“Can we discuss sample size already?” I hear you cry. We shall in a moment, I promise, but we need to get something else straight first. You should be collecting a sample of… what exactly? An equally – if not more – important consideration than size is the representativeness of your sample. To use a crude example, there is no point buying a truckload of bananas, when what you’re looking for is apples. Similarly, there is no point collecting an impressively big sample from the US and India alone, for example, when what you’re interested in is global trends. And that’s because developers in different parts of the world behave very differently when it comes to technology choices, as they have different motivations and business models, and may be at different stages in their journey as they form part of developer ecosystems of varied maturity. Our data proves again and again that there are vast differences between regional developer communities. That’s why we go to great lengths, each and every time, to survey developers from more than 150 countries, so that we may truly gauge the pulse of the global developer community.

Graph showing SlashData’s global developer reach

It’s not only regional diversity, though, that you should carefully balance your sample for. There are several other attributes you should consider, such as the mix of professionals, hobbyists and students, and the size of the organisations that your surveyed professionals work for. The latter is particularly important if, for example, you wish to capture the views of both enterprise developers and startups that are bound to be working with different tools. Demographics such as age may also be important, as you may want to hear from both the young coders, who typically use some technologies more than others (open source software is a prime example here), and from the seasoned developers who may have a deeper understanding but also higher expectations of the tools they use.

Graph showing survey takers’ ages

You should also ensure that you don’t repeatedly rely on the same pool of developers, say a panel, no matter how big. In such a fast-paced industry, behavioural patterns and user profiles may change without warning. By repeatedly surveying the same people over time, you risk failing to observe the change originating from a different pocket of the developer population than the one your panel comes from. And if you do fail to observe the upcoming trend, you will miss the opportunity to ride the wave of change. This is particularly true for the emerging sectors such as augmented and virtual reality, but also for more ‘exotic’ technologies still in the early stages of their lifecycle, such as DNA computing, self-driving cars, or body-brain computer interfaces.

As we track all of these and many more, we reach out to capture the experiences and the intent of developer populations of all shapes and sizes, from small local meetups to large vendor communities. Our surveys are promoted by more than 70 leading community and media partners each time, and we make sure they are not the same 70 every time, to ensure we are not repeatedly hitting the same pools (or communities) of developers. And while we reach out afresh to the developer population each time, we consistently observe meaningful trends in our data – rather than wild jumps – which proves that we do indeed capture a representative view of the software development industry.

Last but not least, be careful of any incentives you offer to survey takers. These must be carefully designed to appeal to all profiles within your target audience, or you risk creating selection bias, i.e. attracting only developers of specific profiles, rather than a random sample of all developer profiles out there.

Is your data clean?

But no matter how hard you try, you’re bound to get some sample bias – beware of anyone who says they don’t. How do you deal with it? 

First, particularly if you are offering incentives, you should clean out all fraudulent – or simply illegitimate – responses. There will always be those who are in it only for the prize, randomly clicking through your survey and diluting results. They may even build smart bots to do that (after all, we are talking about developers here). At SlashData we have developed sophisticated ML algorithms that identify such responses and unceremoniously throw them out. Based on the metadata that our bespoke survey-taking platform tracks, we are able to outsmart the not-so-honest respondents and call them out.
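Our actual cleaning relies on ML models trained on survey metadata, but the flavour of the signals involved can be illustrated with a couple of simple heuristics. The fields, thresholds, and example responses below are invented for illustration only:

```python
# A simple heuristic sketch of survey response cleaning (not SlashData's actual
# ML-based algorithms). Fields, thresholds, and examples are invented.
def looks_illegitimate(response):
    answers = response["answers"]             # list of selected option indices
    seconds = response["completion_seconds"]

    too_fast = seconds < 120                  # clicked through far faster than a human could read
    straight_lining = len(set(answers)) == 1  # picked the same option for every question
    # Flag for removal (or manual review) if any red flag is raised.
    return too_fast or straight_lining

responses = [
    {"id": 1, "completion_seconds": 45,  "answers": [2, 2, 2, 2, 2, 2]},
    {"id": 2, "completion_seconds": 610, "answers": [1, 3, 2, 4, 2, 5]},
]
clean = [r for r in responses if not looks_illegitimate(r)]
print([r["id"] for r in clean])   # -> [2]
```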

Then, it’s a matter of correcting for over-represented groups. It could be that, despite your best efforts, you attracted disproportionately more hobbyists than you should have done, for example. Or perhaps word got around in a particular language community about this cool survey, and slightly more enthusiasts than you had hoped for came forth to vouch for their favourite programming language. How do you fix those imbalances? Especially given that you don’t know what the true (or population) proportions are – since that is the very thing you’re trying to estimate. In such cases, some – very, very careful – data weighting is in order.

You have to be extremely careful (have I stressed this enough already?) as to how to go about it, first to identify the sources of bias, and then to decide how to correct it without introducing over-correction. But this is a rather long story, and we’ll keep it for another day. 

All I shall say here is that at SlashData we treat all the different channels through which we get our data (such as our network of 70+ partners mentioned earlier) as independent samples, which we then compare across a set of parameters which we know may introduce bias. We use ML models to specify the level of correction that should be applied, and take into account all types of bias that a single response may be simultaneously carrying. 
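To give a flavour of the basic idea behind such corrections (though not of the ML models themselves), here is a minimal post-stratification sketch: each respondent group is re-weighted so that its weighted share matches an assumed target share. The groups, counts, and target proportions below are invented:

```python
# Basic post-stratification sketch (not the ML-based correction described above).
# Each respondent gets weight = target share of their group / observed share.
from collections import Counter

respondents = (["professional"] * 700) + (["hobbyist"] * 200) + (["student"] * 100)

# Invented "true" population shares we want the weighted sample to match.
target_share = {"professional": 0.80, "hobbyist": 0.12, "student": 0.08}

observed = Counter(respondents)
n = len(respondents)
weights = {g: target_share[g] / (observed[g] / n) for g in observed}

for group, w in weights.items():
    print(f"{group}: observed {observed[group] / n:.0%}, weight {w:.2f}")
# Over-represented groups (here, hobbyists and students) get weights below 1,
# under-represented ones (professionals) get weights above 1.
```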

What is your margin of error?

We get that question a lot. I hope that by now I have demonstrated that, although important, this should not be your only concern. In fact, the margin of error can be quite misleading if used as the only metric to assess sample and research quality. Let me give you some statistical insight that might shed some light on this problem.

As a quick search can reveal, the margin of error is designed to measure uncertainty in random samples. More specifically, the theory of the margin of error (MoE) applies, strictly speaking, only to individual questions (though it can, under certain circumstances, be generalised to full surveys), and only to perfect random samples. This implies that if the assumption of perfect randomness does not hold (and in the real world, in most cases it doesn’t), then the theory collapses and your MoE estimate is meaningless. To go back to our crude example of obtaining a truckload of bananas as a sample of apples: just because you have a truckload (and from a large truck at that), your margin of error estimate will look satisfyingly low. Your calculation, however, will not have accounted anywhere for the fact that these were in fact (loads!) of bananas, not apples, and as such, they make for a useless sample, albeit a tasty one.

That is why at SlashData, instead of just quoting margins of error that when used in isolation may misleadingly inflate confidence in a sample, we focus our efforts on obtaining a sample that is as big, as random and as robust as possible. These are, in fact, the three elements that do lead to a reliable estimate of a margin of error. In other words, it’s not enough to only quote a margin of error. One should also be able to demonstrate that the underlying assumptions of the MoE calculations, namely randomness and normality, are met to a satisfactory degree. So if you’re out there shopping for survey-based research, make sure to first scrutinise any potential sellers for the health of their outreach and sampling methodology. Only then, if satisfied, ask about the margin of error. 

Go for a large sample you can dig into

All that said, sample size is, of course, very important. To continue the margin of error discussion, suppose you are faced with a choice of two random samples (that is, samples you can be reasonably sure are close to random). If they both come from the same population, say the global developer population, then, at any given level of confidence, their difference in margin of error will lie in the sample size. Based on our robust developer population sizing research, there are currently (as of Q1 2021) 24.3 million developers in the world. That means that even at a 99% confidence level, our sample of more than 19,000 developers from across the world yields a margin of error of less than 1% at the question level. If instead you had, for example, a (random) sample of 2,500 developers, your margin of error for the same question would be around 3%.
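These figures follow from the standard margin-of-error formula for a proportion, assuming a simple random sample and the worst-case proportion of 50%:

```python
# Margin of error for a proportion from a simple random sample:
# MoE = z * sqrt(p * (1 - p) / n), worst case at p = 0.5.
from math import sqrt

def margin_of_error(n, z=2.576, p=0.5):   # z = 2.576 for a 99% confidence level
    return z * sqrt(p * (1 - p) / n)

print(f"{margin_of_error(19_000):.2%}")   # ~0.93% -> under 1%, as quoted
print(f"{margin_of_error(2_500):.2%}")    # ~2.58% -> around 3%
```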

But having a low margin of error is not the key reason for which you should aim for a large random sample. The main reason is having the ability to dig deeper and slice the data, while still having enough sample left from which to confidently draw conclusions. If, like us, you run unsupervised models, random forests and other ML models to identify developer segments and predict their technology choices, then you need large samples to do it. Otherwise, you end up with a really thin sample that is anything but reliable with regards to the picture of developer personas that it paints. Even if you’re into simply tracking trends for subpopulations of interest, you still need a big-enough sample. In our data dashboards, for example, we give you the option to filter for many attributes, such as age, region, professional status, gender, decision-making power, and much more. If we were to start off with a small sample, filtering would leave you with a tiny, and therefore useless, sample size. For example, filtering in our Developer Population Calculator for those under 25 years of age, who are students, and have up to five years of experience, still leaves us with nearly 4,000 respondents to draw conclusions from.

Are you lost? 

Here’s a cheat sheet:

  1. Ask the right questions. Make sure you accurately specify what business questions you need answered, and by which audience.
  2. Select the data collection method (such as a large-scale survey, telemetry, qualitative research, etc.) that is best suited for the problem you’re trying to solve. 
  3. Carefully design your developer outreach to obtain a sample that is representative of the population you are interested in. 
  4. Aim for a large sample, so that you may confidently dig into it, if you need to.
  5. Clean your data from illegitimate, or even fraudulent responses.
  6. If you’re confident enough that you have a random sample, estimate your margin of error – at the question, not survey, level.
  7. Check for sample bias and correct for any obvious deviations from randomness, without overfitting (or over-correcting).

In short, as you may have guessed by now, the art of research design and developer outreach is not for the faint-hearted. And it cannot be wrapped up in a single margin of error figure. But fear not. With more than 10 years of experience in mapping the developer ecosystem through large-scale surveys, we are here to help. All you have to do is get in touch.

*Based on our Developer Program Leader surveys. Have your say on the latest one.


Return on Developer Investment

My most fun job ever was as a C++ developer. Ok, I don’t have much grey hair yet, but I fondly remember the late 90s and the challenges of writing a background synchronisation application on a Compaq iPAQ. And reverse engineering Mozilla’s Navigator into an XSLT parser.

My second most fun job ever has been building a company that helps the world understand developers, with research. We’ve come a long way – and a few pivots – from surveying the pulse of 400 developers in 2009 to 30,000 developers annually in 2016. That’s a lot of data – in fact more than our analyst team can chew.

It’s a privilege to be working with some of the biggest names in tech – I’ve learned a lot over the past two years. Earlier this month, Amazon, Microsoft, Facebook, Adobe, Intel, Oracle and many more joined our first Future Developer Summit, and shared some of their best practices in how they work with developers. I’d like to share some of the learnings here.

Return on Developer Investment.

You would think that with billions of dollars spent every year on building tools for developers, running hackathons, loyalty programs, tutorials and how-tos, evangelist and MVP programs – the platform leaders would have figured it all out. Yet, with so much money being spent on developer tools and marketing there is no standard for measuring the Return on Developer Investment.

Most companies represented at the Future Developer Summit shared how they measure success. At their inception, developer-facing orgs measure success by the number of developers touched – but that’s a meaningless metric, a dinosaur from the age of print marketing. Some platforms are using NPS (net promoter score), polling their active developers once a year on how likely they are to recommend the platform. Many are informing product decisions based on developer comments (“will you ever fix that?”) – you’ll be surprised how many decisions are taken based on “the devs that I spoke to said…”.

Other developer relations teams are measuring success through the number of apps in the store, and the number of apps using signature APIs. In the case of open source projects, a popular metric is GitHub stars, forks and commits over time. The more sophisticated platforms track the Return on Developer Investment funnel from SDK downloads to app download and use. But there isn’t a consistent way to measure how the investments in hackathons, tutorials, how-tos, loaner devices, evangelism programs and so many more developer-facing activities are paying off for the likes of Google, Amazon and Facebook.
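As a purely illustrative example of such a funnel (the stage names and numbers are invented, not any particular platform’s data), stage-to-stage conversion might be tracked like this:

```python
# Toy Return on Developer Investment funnel (stages and numbers are invented).
funnel = [
    ("SDK downloads", 50_000),
    ("Apps published", 4_000),
    ("Apps with active users", 1_500),
    ("Apps using signature APIs", 600),
]

# Report the conversion rate from each stage to the next.
for (stage, count), (_, prev) in zip(funnel[1:], funnel[:-1]):
    print(f"{stage}: {count:,} ({count / prev:.0%} of the previous stage)")
```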

Quality of apps, not quantity.

Another theme of the Future Developer Summit was the need for quality, not quantity, of applications at the start of an ecosystem. B2B ecosystems like Slack and Intuit prioritise quality; poorly written messaging apps can damage not just the perception of Slack, but also the perception of chatbots in general. Similarly, a poorly written app for the QuickBooks platform can wreak havoc on sensitive financial data for thousands of small businesses. As a result, both Slack and Intuit have very stringent app review processes, including weeks of testing, usability and security reviews. To improve quality for bots, Slack has pioneered a “Botness” program, bringing together bot platforms and leading bot developers; the aim is to “make bots suck less”, i.e. improve the bot user experience and avert long-term damage to the reputation of chatbots. There are already 250 members signed up and the next event is on November 4 in NYC.

The next Future Developer Summit will focus on best practices for developer relations. If you’d like to be part of the invite-only audience of platform leaders, register your interest at www.futuredeveloper.io

 

Best Practices for a successful IoT Developer Program

Events and training programs are a main component in many developer programs for IoT – but just how effective are they?

This infographic sheds some light into the effectiveness of training and events, based on our Best Practices for IoT Developer Programs report.


The RoI of developer events

Developers deeply value the community they belong to. With this in mind, real-life events are surely the ultimate, high-touch way to get together. Or are they? Our new report uses data from 3,150+ IoT developers to shed light on the matter.


Developers deeply value the community they belong to, and use community resources including open source communities and Q&A sites (such as StackOverflow) every day to find information, stay up to date, and get professional support from their peers on the tools, platforms, and APIs they use.

This is one of the clear conclusions of VisionMobile’s new report on Best Practices for IoT Developer Programs, which explores what IoT developers value most in a developer support program. For this report we surveyed 3,150+ IoT developers from 140+ countries in our 9th edition Developer Economics survey – the largest research to date on IoT developers.

If developers value the sense of community so much, then real-life events are surely the ultimate, high-touch way to get together. In our experience events are often the focal points of developer programs – and a big budget-eater! It’s worth looking closely at which developers attend the different types of events, and which don’t.

In general, events like conferences, seminars, workshops, Meetups, and hackathons are a mid-range source of information for developers. Between 10% and 30% of developers attend them, depending on the type of event and the developer segment. Workshops and conferences are the most popular, each a source of information for 22% of developers, followed by Meetups (18%) and hackathons (16%). In other words, you reach only about a fifth of the developer population with events. The expectations towards developer programs to organise events are even lower: only 8% of IoT developers consider events to be a key feature of the support program.


It is a good practice to tune the events you organise or support to your specific developer audience. For example, developers working on Data Mashups value the formal knowledge transfer offered by seminars, trainings, and workshops (+10 percentage points relative to other developers), and to a lesser extent conferences (+4 pp). In contrast, device makers value the opportunity for playful exploration offered by hackathons (+5 pp).

Similarly, events are, by and large, an enterprise affair. Developers working in large organisations are significantly more likely to attend events of any kind. This includes hackathons, which are often considerably less formal events than conferences or seminars.

Events have limited reach and are certainly not the activity with the highest ROI in a developer program. They should be considered carefully before including them in the program mix. This said, they can be a valuable addition when they are centered around PR and networking, i.e. community building, and optimized for the right audience.

In the full report, we look in detail at what Internet of Things developers need and expect from your program, beyond the obvious activity of organising developer events. We show how Internet of Things developers can be an important ingredient in your business model, but also how competition for their attention is fierce. We discuss the best practices in supporting your developer constituency by fiercely attacking friction points and by fostering community. We also discuss how developers prefer to get educated in your technology, what role money and commercial opportunities play, and how you can reach out to developers in an effective way. Our data from 3,150+ developers lays out a roadmap for the creation of a solid developer program, in tune with developer needs. Get it here.

North American App Developer Trends 2014: Insights into the app economy powerhouse

North America plays a very central part in the app economy. Not only is it home to the companies that create all of the leading mobile platforms, it is also the largest creator of app revenues. We estimate that in 2013, North America contributed 42% of the world’s app economy output. Developer mindshare in the region is also considered particularly valuable by OEMs and tools vendors, due to the disproportionate global shares of both venture capital and media coverage focussed on the region. With high smartphone penetration and relatively mature 4G networks, North America is often the starting point for new developer trends.


For those that see value in understanding developer trends and preferences in North America we have created a new report which compares the region to the rest of the world. The report covers developer mindshare for platforms, languages and tools, as well as revenues and deeper dives into enterprise and game developer markets. It answers questions like these:

  • Why are developers in North America more likely to target mobile browsers than those in the rest of the world?
  • Android mindshare is higher than iOS in North America but by how much?
  • Despite lower mindshare, iOS is prioritised by more North American developers than Android but how many?
  • How much more revenue does a developer in North America earn on average than one elsewhere in the world?
  • How is that extra revenue distributed amongst the developer population and across platforms?
  • Which revenue models are most popular and which are the most successful in North America?
  • Enterprise developers in the region make significantly more revenue than those targeting consumers – how many times greater is the average revenue?
  • Which revenue models do these enterprise developers favour and what’s their share of the total revenue pie?
  • Games are also monetised differently than other apps, which are the most popular revenue models for North American game developers?
  • Ad networks are the most popular category of tool globally but not in North America – what’s more popular there?
  • What’s the breakdown of developer tool usage across platforms in the region?

The North American App Developer Trends 2014 report includes many more insights and explanations of key trends. It is also packed with 20 graphs, slicing the relevant data in different ways. If you need to know more about developers in the region, then this report is for you.

Which apps make more money?

[How do app developer revenues vary by country, or platform? Does the number of platforms make a difference to app revenues? Which models bring in the most revenues? We revisit our November analysis of app monetisation with more insights from our Developer Economics 2013 survey across 3,400+ developers – while launching our latest survey, which is available here]

New Developer Economics survey

Back in November, we looked at which apps make money based on research on how app revenues vary by platform, app category, country and more. In this article we update our analysis on app monetisation based on the latest research from Developer Economics 2013 across 3,400+ app developers, including analysis that did not make it into the report.

We’re also proud to launch our very latest Developer Economics survey, which reaches thousands of app developers and provides the data for our famous State of the Developer Nation reports. Thanks to sponsorship by BlackBerry, Mozilla, Intel and Telefonica, it is possible to provide these reports and additional insights, for free, to the entire mobile community.

Take part in the survey, spread the word and help us drill deeper into the app economy and what makes it tick. We have prizes aplenty for developers, with 7 devices up for grabs (one iPhone 5, two Samsung Galaxy SIII, two Nokia Lumia 920 devices and two BlackBerry Dev Alpha handsets) – plus an AR Drone 2.0, a Nest Learning Thermostat and a Nike Fuel Band for participants who also subscribe to our developer panel. Last, but definitely not least, our friends at Bugsense are giving away one month of free crash reporting to each and every participant.
