Developer Economics 2012 – The new app economy

[The latest Developer Economics report is now live – this is the third in the report series that set the standard for developer research and focuses on five main areas: The redefinition of mobile ecosystems, Developer segmentation, Revenues vs. costs in the mobile economy, App marketing and distribution and Regional supply vs. demand of apps. Developer Economics 2012 is available for free download at www.DeveloperEconomics.com, thanks to the sponsorship by BlueVia!]

Here’s just a sample of the key insights and graphs from the report – download the full report for more!

– The new pyramid of handset maker competition. In the new pyramid of handset maker competition, Apple leads innovators, Samsung leads fast-followers, ZTE leads assemblers and Nokia leads the feature phone market. Apple has seized almost three quarters of industry profits by delivering unique product experiences and tightly integrating hardware, software, services and design. Samsung ranks second to Apple in total industry profits. As a fast follower, its recipe for success is to reach market first with each new Android release. It produces its own chipsets and screens – the two most expensive components in the hardware stack – ensuring both profits and first-to-market component availability.

Continue reading Developer Economics 2012 – The new app economy

The Future of Voice

[Is telco voice innovation dead? Or will smartphones and LTE deliver a much wider breadth of voice applications? Guest author Dean Bubley argues that ‘voice’ is about to experience several discontinuities as it goes beyond our limited notion of ‘telephony’.]


Telecom operators are facing a huge problem: in developed markets, we are close to the point of “Peak Telephony” – or maybe even past it already. Peak Telephony – inspired by the notion of reaching “Peak Oil” production – refers to the point after which voice revenues face terminal decline. The traditional fixed and mobile telecoms industry is already facing a potentially bleak outlook as call volumes stagnate and prices erode. While fixed operators have long recognised the threat to their core telephony business, mobile networks are now facing the inevitable as well. Globally, over 70% of wireless operators’ revenues still come from voice services and SMS, so this is an existential threat – one which threatens profound change to business models and even the extinction of some operators as we know them today.

To an extent, many older telcos had a ten-year extension granted to them by the rise of mass-market mobile services. These appeared at exactly the right point, just as fixed voice prices (especially long-distance) started suffering the competitive onslaught from early VoIP players. But at a group level, declines in fixed-line profits were offset by the rise in mobile. The inherent value of mobility, and the convenience of handsets loaded with easy contact-lists and call-registers, postponed the onset of saturation and substitution.

But now, finally, a combination of Moore’s Law, devices and the Internet are catching up – mobile voice is about to experience several discontinuities and radical change in coming years.

The limitations of “distant voice”
For the past 100 years, we have had essentially three ways for people to communicate over long distances: letters, telegraph and telephone (from the Greek for ‘distant sound’). The traditional phone call has been wonderfully transformative and yet, at the same time, very limiting – even in mobile guise. It has enabled revolutions in both commerce and society greater than virtually any other invention since the wheel and the printing press.

But phone calls do not correspond to the way humans really communicate with each other. We don’t generally think of conversations as “sessions”, or measure their value in terms of their length.

In essence, we have surrendered our natural modes of communication to the restrictions of telephony. We have boiled down “distant voice” interactions into Person A calling Person B for X minutes, via numbered identifiers. Compare that to the more normal style of “close voice” of dropping in and out of conversation, with interruptions, breaks in the flow, background tasks, simultaneous interactions with other people and so forth – using our names.

Normal in-person conversation is enhanced by non-verbal communications, physical context and a multiplicity of other factors. We use a broad range of volume levels, tonal frequencies and gesticulation. Some conversations are synchronous, some asynchronous – people talking over each other, or speaking in turn, perhaps based on relative authority or another social construct. Some are unique to the specific two people or particular cultures, others are generally accepted universally. In a crowded room, we might hear snippets of other conversations, by chance or deliberately, through eavesdropping.

The phone call has been an excellent lowest-common denominator baseline for “distant voice”. Telecom operators have profited immensely from its enablement, especially with the enhancements of mobility and the “wrapper” of a cellphone and its user interface. But in doing so, they have provided us with a single speech product that is intended to span myriad use cases and social/business needs. Only a few other distant-voice technologies have emerged to address niches: push-to-talk, voice messaging, walkie-talkies, CB radio and private radio systems addressing fringe-cases such as taxi dispatch or public safety services.

But now, the landscape is shifting. The combination of smartphone platforms, thriving developer ecosystems, PCs and the Internet has enabled new communications formats to evolve. These formats map much better onto natural human communication preferences. We no longer need to restrict our innate ways of interacting because of the constraints of a piece of wire (or air) and a switch. We can “politely interrupt” with a soft alert or IM before escalating to a call, locate team-mates in virtual worlds with stereo cues, or interact directly with a voicemail for simple tasks, rather than calling back.

We already have in-game voice chat between players, remote baby monitors, always-on voice telepresence, audio surveillance and all sorts of other voice applications which really are not calls, as such. Numerous other voice communication modes are evolving, especially those linked to social and messaging applications.

In a nutshell, we no longer need to shoehorn all of our “distant voice” communications needs into the unnatural format of a “phone call”. We are able to visualise, contextualise, obfuscate, interrupt, lie, drop in and out, waffle, multi-task, spy, listen, store, mumble, overhear, translate, declaim, announce and recall speech over a network in many, many different ways.

Not only that, but the supply of basic “phone calling” functionality has grown much faster than demand. If we do want to make a traditional A-B for X minutes call, we have many modern variants on the theme of a “piece of wire and switch”, now over mobile networks as well as fixed lines. It’s not that hard to do. Sure, numbering is a constraint, and ultimate quality may be a limit – but that is quality measured against the yardstick of the “telephony application”, and not a more general measurement of social communications. We don’t really complain about the QoS of speech in a noisy pub – or pay extra for a quieter venue.

Will LTE voice be “old telephony” again… or something new?
But the final kicker is the imminence of a major transition point – the adoption of LTE and all-IP mobile networks which are not yet optimised for telephony. Although various initiatives – notably the GSMA’s VoLTE (Voice over LTE) specifications – are developing carrier-grade LTE telephony, the likelihood is that it will take several years to get to the quality, reliability, scalability and cost/power performance of today’s basic GSM. 4G networks have not really been designed with voice in mind – or viewed more cynically, it has always been “someone else’s problem” to solve.

Nobody yet knows what happens when we have 1,000 mobile VoIP users in a cell, moving around, handing off to other cells, causing interference, audio glitches and so forth. Experience from fixed VoIP suggests that tuning networks to mass-market perfection takes a very long time, and it seems unlikely that the extra variables of RF and mobility will make the task easier.

This implies that smartphones on LTE networks – and, by extension, 3G networks as well – risk creating a vacuum, which could well be filled by other “non-telephony” voice applications, while we wait for “plain vanilla mobile calling” to catch up to the realities of wireless IP. The telecoms standards and market representation bodies (3GPP, GSMA and others) have made little effort to diversify efforts into the more generic “distant sound” world, instead focusing on replicating what we have today. Much-trumpeted enhancements such as “HD” (high-definition) codecs go only a tiny distance towards the more complex human-interaction models discussed above.

There is an argument that plain-old telephony (fixed or mobile) can be packaged up and “distributed” through various new “delivery” channels. Linked to the Web and appropriate call-control APIs, many operators are hoping to create new “cloud communications” platforms. But it is unclear whether the underlying telephony control mechanisms and the “session philosophy” of calling really represent the best possible basic ingredient. Add in the usual rigid telco attitudes towards numbering, security, pricing and specific acoustic mechanisms and it seems unlikely that telco-powered telephony will be the best way of creating all of the new “distant voice” applications that will emerge.

Filling the voice innovation gap
What will fill the gap, becoming the platform(s) of choice for the plethora of innovative voice apps and services? It is still too early to tell. It could be some of the larger VoIP incumbents such as Skype or Google, or an established software-client provider like CounterPath. But it could also be one of the new breed of speech-centric application developers such as Viber, Vivox or RebelVox. Major carrier-voice infrastructure vendors such as Cisco, Sonus, Acme Packet and Broadsoft also have roles to play, with some attempting to become more open platforms – although with an eye to their traditional operator customer base.

From a handset standpoint, things are likely to get quite complex. Ordinary phone calls are not going to disappear – but we will start to see multiple voice applications present on each device. This is already happening with Skype and GVoice apps, but looking further ahead, more fragmentation is probable. This will present huge challenges for UI and “contact” applications, as well as a debate about which voice and audio/acoustic components are best installed in the OS, on the baseband or apps processors, in individual apps or even in dedicated audio chips.

One thing is certain, however: making a clear and careful distinction between “voice” and “telephony” is a critical starting point for understanding the landscape. Telephony is what telcos do today. It’s a closely defined service, subject to rules and regulations, and billed in a structured way. But increasingly, general voice applications will go beyond homogeneous “calls” – for example, chat between players of an online game. This will require new business models, new platforms, and new forms of user interaction. How the traditional telephony industry deals with these new voice innovations will be fascinating to watch.

– Dean

Dean Bubley is the founder of research company Disruptive Analysis. He is currently developing a programme of “Future of Voice” master-classes together with communications industry visionary Martin Geddes. Dean can be reached at AT disruptive-analysis DOT com.

[Infographic] Top 5 Handset OEMs 2001-2010

In the past 10 years, the handset OEMs landscape has changed dramatically.

Companies that seemed unshakable have lost ground and are gradually being replaced by new and agile contenders born of the PC industry. The ‘old OEM guard’ is still being driven by momentum, but as one by one these giants fall and smartphone adoption continues to accelerate, the battle for a spot in the top-5 leaderboard is getting more and more heated.

How has the landscape changed, you ask? Well, just take a look at our latest infographic:

Top 5 handset OEM

Feel free to copy the infographic and embed it in your website.

600 pixels wide version

760 pixels wide version

1000 pixels wide version


Enter the Cloud Phone

[With the adoption of SaaS applications, augmented reality, visual recognition and other next-gen phone apps, the smartphone processing model is looking for help from the Cloud. Guest author Vish Nandlall introduces the concept of the Cloud Phone and the technology advances that can make this happen]

Are smartphones converging with laptops? While smartphones enable a rich user experience, there is an order-of-magnitude gap in memory, compute power, screen real-estate and battery life relative to the laptop or desktop environment (see table below). This disparity renders the whole smartphones-vs-laptops question an apples-vs-oranges debate. It also begs the question: can the smartphone ever bridge the gap to the laptop?

|         | Smartphones: Apple iPhone 4 | Smartphones: HTC EVO 4G  | Laptops: ASUS G73Jh-A2        | Laptops: Dell Precision M6500 |
|---------|-----------------------------|--------------------------|-------------------------------|-------------------------------|
| CPU     | Apple A4 @ ~800MHz          | Qualcomm Scorpion @ 1GHz | Intel Core i7-720QM @ 2.80GHz | Intel Core i7-920XM @ 2.0GHz  |
| GPU     | PowerVR SGX 535             | Adreno 200               | N/A                           | N/A                           |
| RAM     | 512MB LPDDR1 (?)            | 512MB LPDDR1             | 4x2GB DDR3-1333               | 4x2GB DDR3-1600               |
| Battery | Integrated 5.254Whr         | Removable 5.5Whr         | 75Whr                         | 90Whr                         |

Source: vendor websites

As a matter of physics, the mobile and nomadic/tethered platform will always be separated along the silicon power curve – largely driven by physical dimensions. The laptop form factor will simply be able to cool a higher horsepower processor, host a larger screen real-estate and house a larger battery and memory system than a smartphone.

Does a smartphone need to be a laptop?
Yes it does… or, at least, it soon will. The low-power constraints of mobile devices were Apple’s official argument in the recent Apple-Adobe feud – and Apple’s acquisition of PA Semi is further testament to the importance of hardware optimization in mobile devices.

The processing envelope for mobile applications is being stretched by the demands of next-generation mobile applications: always-on synchronization of contacts, documents, activities and relationships bound to the user’s time and place; the adoption of Augmented Reality applications by mainstream service providers, pushing AR into a primary ‘window’ of the phone; advanced gesture systems such as MIT’s “sixth sense”, which combine gesture-based interfaces with pattern recognition and projection technology; and voice recognition and visual recognition of faces or environments, which make mobile phones an even more intuitive and indispensable remote control for our daily lives. All these applications require the combination of a smartphone “front-end” and a laptop-class “back-end” to realise – not to mention having to run multiple applications in parallel.

The appearance of these next-gen applications will also create greater responsibilities for the mobile application platform: it is now important to monitor memory leaks and stray processes sucking up power, to detect, isolate and resolve malicious intrusions and private data disclosure, and to manage applications which require high-volume data.

So we come back to the question, is there a way to “leapfrog” the compute and memory divide between tethered and mobile devices? The answer, it turns out, may lie in the clouds.

Enter the Cloud Phone
The concept of a Cloud Phone has been discussed often, most recently as the topic of research papers by Intel Labs and in the NTT DoCoMo Technical Review.

The concept behind the Cloud Phone is to seamlessly off-load execution from a smartphone to a “cloud” processing cluster. The trick is to avoid having to rewrite all the existing applications to provide this offload capability. This is achieved through creating a virtual instance of the smartphone in the cloud.

The following diagram shows the basic concept in a nutshell (source: NTT DoCoMo Technical Review).

Cloud Phone technology has been brought back into vogue by advances in four key areas:

  1. Lower-cost processing power. Compute resources today are abundant, and data centers have mainstreamed technologies for replicating and migrating execution between and within connected server clusters.
  2. Robust technologies for check-pointing and migrating applications. Technologies such as live virtual machine migration and incremental checkpointing have moved out of the classroom and into production networks.
  3. Reduced over-the-air latency. The mobile radio interface presents a challenge in terms of transaction latency. Check-pointing and migration require latencies on the order of 50-80ms; these round-trip times can be achieved over current HSPA, but become more realistic in next-generation LTE systems. Average latencies in a “flat” LTE network are approximately 50ms at the gateway, which suddenly makes hosting the smartphone application on a carrier-operated “cloud” very much a reality. Note that past the gateway, or beyond the carrier network, latencies become much less manageable and can easily reach 120ms or more.
  4. Mobile virtualization. This technology decouples the mobile OS and applications from the processor and memory architecture, enabling applications and services to run on “cloud” servers. It has become an area of intensive research in mobile device design, and was covered in an earlier article by OK Labs’ Steve Subar.
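As a toy illustration of the latency budget in point 3, the offload decision can be sketched as a simple threshold check. The 50-80ms migration budget and the sample round-trip times come from the figures above; the class and method names are purely illustrative, not part of any real system:

```java
// Illustrative sketch only: a planner that decides whether execution can be
// migrated to a cloud host, given the 50-80ms check-pointing budget
// discussed above. Names and structure are hypothetical.
public class OffloadPlanner {

    // Upper bound for transparent check-pointing/migration (milliseconds)
    static final int MIGRATION_BUDGET_MS = 80;

    /** True if the measured round-trip time fits the migration budget. */
    public static boolean canOffload(int measuredRttMs) {
        return measuredRttMs <= MIGRATION_BUDGET_MS;
    }

    public static void main(String[] args) {
        // ~50ms at the LTE gateway: within budget
        System.out.println(canOffload(50));
        // 120ms+ beyond the carrier network: offload is no longer transparent
        System.out.println(canOffload(120));
    }
}
```

The interesting part is where the threshold is met: inside the carrier’s LTE network the check passes, beyond the gateway it fails – which is exactly why the carrier edge is the natural home for the Cloud Phone.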

A cloud execution engine could off-load smartphone tasks such as visual recognition, voice recognition, Augmented Reality and pattern recognition, effectively overcoming the smartphone’s hardware and power limitations. This model would also allow key maintenance functions requiring CPU-intensive scans to be executed on a virtual smartphone “mirror image” in the cloud. It would also facilitate taint checking and data leak prevention, which have long been used in the PC domain to increase system robustness.

Another consequence of the Cloud Phone model is that it provides a new “value-add point” for the carrier in the mobile application ecosystem. The low-latency requirements demand optimizations at the radio-access network layer, implying that the network carrier is best positioned to extract value from the Cloud Phone concept – plus, operators can place data centres close to the wireless edge, allowing very low-latency applications to be realized. This doesn’t rule out Google entering the fray – indeed, its acquisition of Agnilux may well signal a strategy to build a proprietary server processor to host such Cloud Phone applications.

The raw ingredients for the Cloud Phone are falling into place: more users are moving to SaaS-based phone applications, and HTML5 is being adopted by handset OEMs. There is no shortage of applications waiting to exploit a Cloud Phone platform: in July alone, 54 augmented reality apps were added to the Apple App Store. Google has also broken ground in the Cloud Phone space with Cloud to Device Messaging, which helps developers channel data from the cloud to their applications on Android devices.

What other Cloud Phone applications do you see on the horizon? When do you see Cloud Phones reaching the market?

– Vish

[Vish Nandlall is CTO in the North American market for Ericsson, and has been working in the telecoms industry for the past 18 years. He was previously CTO for Nortel’s Carrier Networks division, overseeing standards and architecture across mobile and wireline product lines. You can read his blog at www.theinvisibleinternet.net]

The many faces of Android fragmentation

[Android fragmentation is only getting started. Research Director Andreas Constantinou breaks down the 3 dimensions of Android fragmentation and argues how Android will become a victim of its own success]
The article is also available in Chinese.

There’s been plenty of talk of Android fragmentation, but little analysis of its meaning and impacts.

As far as definitions go, the best way to look at fragmentation is not from an API viewpoint, but from an application viewpoint: if you take the top 10,000 (free and paid) apps on Android, how many of them run on all Android-powered phones?
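That app-centric definition can be made concrete with a toy metric: the share of top apps whose supported-device list covers every handset. The class name and all data below are invented purely for illustration:

```java
import java.util.Map;
import java.util.Set;

// Toy illustration of the fragmentation metric defined above: the share of
// top apps that run on every Android handset. All names and data invented.
public class FragmentationMetric {

    /** Fraction of apps whose supported-device set covers all devices. */
    public static double universalShare(Map<String, Set<String>> appDevices,
                                        Set<String> allDevices) {
        int universal = 0;
        for (Set<String> devices : appDevices.values()) {
            if (devices.containsAll(allDevices)) {
                universal++;
            }
        }
        return (double) universal / appDevices.size();
    }
}
```

Fed the top 10,000 apps and the full device list, a universalShare well below 1.0 would quantify exactly the fragmentation described above.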

For Google’s Android team, fragmentation is what keeps them up at night. Fragmentation reduces the addressable market for applications, increases the cost of development and could ultimately break the developer story around Android, as we’ll see.

Google’s CTS (compatibility test spec) is predicated on ensuring that Market apps run on every Android phone. Android handsets have to pass CTS in order to get access to private codelines, the Market or the Android trademark as we covered in our earlier analysis of Google’s 8 control points – and yes, Google controls what partners do with Android, contrary to the Engadget story.

The 3 dimensions of Android fragmentation
Many observers would point to fragmentation arising as a result of the open source (APL2) license attached to the Android public source code. Reality, however, is much more complex. There are 3 dimensions of Android fragmentation:

1. Codebase fragmentation. Very few companies have taken the approach of forking the public Android codebase, as permitted under the APL2 license; Google innovates so fast (5 major versions in 12 months) that once you fork, the costs of keeping up to date with Google’s tip-of-tree increase prohibitively over time (Nokia found this out the hard way by forking WebKit and then regretting it).

The main fork of the Android codebase is by China Mobile (the world’s biggest operator, with over 500M subscribers), which has outsourced Android development to software company Borqs. China Mobile cares less about keeping up to date with the latest Android features, as the China market operates as an island where cheap, fake (Shanzhai) handsets predominate. Mediatek, a leading vendor of chipsets shipping in 200-300 million handsets per year, plans to make Android available, which could mean another major fork. Cyanogen and GeeksPhone also fork the Android public codeline, but their builds are designed for a niche of tech-savvy Android fans.

2. Release fragmentation. Google has released 5 major updates to Android in 12 months (1.5, 1.6, 2.0, 2.1 and recently 2.2), all of which introduce major features and often API breaks. You may notice how accessing the Android Market from a 1.6 versus a 2.1 handset gives you a different set of apps. So much for forward compatibility. AndroidFragmentation.com (a community project) has documented several cases of release fragmentation arising from releases which break APIs (e.g. 2.0 SDK breaks older contact apps) or from inconsistent OEM implementations (e.g. receiving multicast messages over WiFi is disabled for most HTC devices).
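The defensive pattern developers use against such API breaks is a runtime version guard: branch on the platform API level rather than assuming one code path fits all installed releases. On a device the level comes from android.os.Build.VERSION.SDK_INT; in this sketch it is passed in as a parameter so the snippet stays self-contained (API level 5 is Android 2.0, where the contacts APIs were rewritten):

```java
// Sketch of the version-guard pattern used to cope with release
// fragmentation. On Android the API level would be read from
// android.os.Build.VERSION.SDK_INT; here it is a parameter so the
// snippet runs anywhere. The class name is illustrative.
public class ContactsCompat {

    static final int ECLAIR = 5; // API level of Android 2.0

    /** Returns the contacts content URI appropriate for the OS version. */
    public static String contactsUri(int sdkInt) {
        if (sdkInt >= ECLAIR) {
            // New ContactsContract provider introduced in Android 2.0
            return "content://com.android.contacts/contacts";
        }
        // Legacy Contacts.People provider, pre-2.0
        return "content://contacts/people";
    }
}
```

Multiply this guard across every breaking release and every OEM quirk, and the per-app cost of fragmentation becomes obvious.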

Release fragmentation is a by-product of Google’s own speed of innovation – and Andy Rubin has hinted there are more major releases coming in the next 6 months. It’s clearly a sign of how young, agile Internet companies know how to develop software much better than companies with a mobile legacy; major Symbian versions take 12-18 months to release.

Release fragmentation is particularly acute due to the limited availability of an automatic update mechanism like that found on the iPhone. We call this phenomenon ‘runtime aging’, and it is directly responsible for increasing the cost of developing applications. Tier-1 network operators see handsets in their installed base with browsers that are 1-6 years old – that’s how hairy it can get for mobile content (and software) development companies. [update: we understand that certain Android handsets come with a firmware update (FOTA) solution available from Google and other FOTA vendors, but it is installed reactively (i.e. to avoid handset recalls) rather than proactively (i.e. to update all handsets to the latest OS flavour)]

Google itself reports that the Android installed base is split between devices running versions 1.5, 1.6 and 2.1 (at least for those devices accessing the Android Market). The detailed breakdown as of mid-May 2010 is as follows:

Release fragmentation also arises out of Google’s elitist treatment of its OEM partners. Google will pick and choose which private codeline is available to which OEM based on commercial criteria (contrary to Michael Gartenberg’s story). Take for example how Sony Ericsson’s X10 (running Android 1.6) came to market after the Nexus One (running Android 2.1). Ironically, both handsets were made by HTC. [correction: the X10 was developed by Sony Ericsson Japan]

3. Profile fragmentation. Android was designed for volume smartphones. But it arrived at an opportune time – just after the iPhone launch and just as consumer electronics manufacturers were looking at how to develop connected devices. This resulted in two effects that Google hadn’t planned for:

– Android was taken up by all tier-1 (and many tier-2) operators/carriers hoping to develop iPhone-like devices at cheaper prices (i.e. lower subsidies) and greater differentiation. That meant that while operators funded Android’s adolescent years (2008-2010), they niched Android handsets to high-end features and smartphone price points.

– Android is now being taken up by tens of consumer electronics manufacturers, for everything from car displays and set-top boxes to tablets, DECT phones and picture frames. The Archos internet tablet was just the beginning. Each of these devices has very different requirements and therefore results in a different platform profile.

The timing of Android’s entry into the market has therefore resulted in two implications related to fragmentation.

Firstly, Android’s official codebase isn’t suited for mass-market handsets (think ARM9 or ARM11, 200-500MHz). To get to really large volumes (100M+ annually), Google will need to sanction a second Android profile for mass-market devices. This is a Catch-22, as a second profile is needed to hit large volumes, but it would also break the Android developer story.

Secondly, every new platform profile designed for a different form factor (in-car, set-top box, tablet, etc.) will create API variations that will be hard to manage. That’s one of the key reasons behind the Google TV initiative and the Open Embedded Software Foundation. However, even Google can’t move fast enough to coordinate (manage?) the tens of use cases and form factors emerging for Android.

All in all, Android fragmentation is going to get far worse, as Android becomes a victim of its own success. But hey, would you expect to have a single app (and a single codebase) that runs on your TV, phone and car?

And therein lies the opportunity for tools vendors: app porting tools, compatibility test tools and SDKs to help bridge the gap across the eventual jungle of Android fragmentation. And for those looking to better understand the commercial dynamics behind Android, we offer a half-day training course on the subject.

What do readers think? Do you have any fragmentation stories to share?

– Andreas
you should follow me on twitter: @andreascon

Palm: $1.2B Down the Shredder

[The acquisition by HP will not save Palm. Guest author Michael Vakulenko explains why the sum of Palm and HP is close to zero]

As an old-time Palm user, I was always secretly hoping for a resurgence of this familiar and trusted company. At a rational level, however, I didn’t believe that the new Palm stood a chance in the rapidly changing smartphone market. See my earlier analysis in Who can save Palm here at the VisionMobile blog.

HP’s acquisition makes Palm part of a large and financially solid company, but doesn’t compensate for its other weaknesses. Smartphone competition today boils down to competition between service platforms, with Apple and Google leading the way. Considering the realities of today’s smartphone market, there are very few real synergies between HP and Palm.

The three missing synergies

Today people don’t buy smartphones for their hardware, but for what they can do with them. This largely means the software platform and the services built around the phone. Both Apple and Google excel in this area, albeit using very different approaches.

Palm’s WebOS offers a slick UI and a promise of simplified app development by fully adopting the web paradigm. But it lacks a clear differentiation (a killer use case) and an ecosystem unlocking the device into the hundreds or thousands of different things people could do with it. Let’s face it: WebOS devices didn’t sell poorly because Palm lacked marketing dollars. They sold poorly because they weren’t good enough compared to the competition. HP’s marketing money and distribution muscle won’t save the day.

Today’s leaders – iPhone, Blackberry and Android – all have clear differentiation: the iPhone is all about entertainment and the Internet, backed by a large iTunes user base. Blackberry sells mobile email, backed by corporate IT adoption and a strong distribution network. Android seamlessly integrates with Google services, promising a free and open Internet. The vague notion of an “HP Experience” looks pretty pale in comparison.

Critically important, app developers and Internet companies already have their hands full with iPhone, iPad, Blackberry, Android, not to mention the upcoming Windows Phone 7. What does HP have to offer in exchange for some mind-share? Any bright ideas?

Last, but definitely not least: mobile operators/carriers take on the lion’s share of smartphone promotion and subsidy costs, hoping to attract new subscribers and increase the ARPU of existing ones. What can HP/Palm offer to convince operators to take marketing and subsidy dollars from iPhone, Blackberry and Android, and put them into HP/Palm? I don’t see much. Do you?

Clear differentiation, developer mindshare and operator subsidies are all critical today for the success of a smartphone platform. All these were and remain Palm’s weaknesses, regardless of its financial situation. HP does not complement Palm in any of these critical areas.

Chasing the Apple dream
A quick glance at HP’s earnings breakdown reveals HP as an electronics equipment company at its core. The company generates most of its revenues from selling printers, laptops, desktop PCs and servers. Smartphone unit sales are catching up to laptop sales, while laptop margins are getting thinner and thinner. It is easy to see how tempting it would be for HP management to try to emulate Apple’s model of selling high-margin devices.

However, Apple owes much of its success to its vertical integration, which allows it to blend hardware, software and services into iconic products. This vertical integration is ideally suited to breaking new ground and creating new product categories. It is a critical factor in Apple’s ability to create such products as the Apple Lisa, iPod, iPhone and iPad.

As explained by Clayton Christensen in this seminal paper, vertical integration is an advantage in emerging product categories, where it helps to overcome technical challenges. Vertical integration however becomes a disadvantage in maturing markets, where flexibility, customization and modularity are of greater importance.

It is difficult to see HP successfully reproducing Apple’s model. The opportunity to be first with an iPhone-like product no longer exists.

Is this good news?
The deal doesn’t look particularly bright for HP shareholders. But maybe, in the broader scheme of things, the deal is great news for many other people: investment bankers will pocket multi-million-dollar commissions, Palm’s investors and management will be spared further misery, HP executives will boost their egos, business newspapers will sell some ads, and bloggers (including myself) will have something to write about.

How do you think the acquisition will shape up for Palm and HP?

– Michael

[Michael Vakulenko has been working in the mobile industry for over 16 years starting his career in wireless in Qualcomm. Throughout his career he gained broad experience in many aspects of mobile technologies including handset software, mobile services, network infrastructure and wireless system engineering. Today Michael consults to established companies, start-ups and operators. He can be reached at michaelv [/at/] WaveCompass.com]

Adobe defends its mobile strategy

[Is Adobe’s mobile strategy doomed? Guest author Mark Doherty, Platform Evangelist for Mobile and Devices at Adobe, responds to the recent criticism and argues that the best is yet to come]

The Big Picture
Adobe’s vision – to revolutionize how the world engages with ideas and information – is as old as Adobe itself. In fact, the company was founded 28 years ago on technologies like PostScript, and later PDF, which enabled the birth of desktop publishing across platforms.

Today Flash is used for 70% of online gaming and 75% of online video, and has been driving innovation on the web for over a decade. Flash Player’s decade-long growth can be attributed to three factors:

  1. Adobe customers such as BBC, Disney, EPIX, NBC, SAP and Morgan Stanley can create the most expressive web and desktop applications using industry leading tools.
  2. The Flash Player enables unparalleled cross-platform consistency, distribution and media delivery for consumers on the desktop (and increasingly on mobile).
  3. A huge creative community of designers, developers and illustrators is involved in defining Flash, and hence driving the web forward.

Now, as consumers diversify their access to the web, they are demanding the same experiences irrespective of the device. Content providers and OEMs across industries recognize this trend and are delivering Flash Player and AIR as complementary web technologies to extend their vertical propositions. The process of actually delivering this is not trivial, and was made more complex by a failing global economy, but we are on schedule and the customer always wins.

Where we’ve been
The success of Flash on mobile phones has been second only to Java in terms of market penetration, but second to none in terms of consistency. According to Strategy Analytics, Flash has shipped on over 1.2 billion devices, making it the most consistent platform available on any device.

In 2008 Adobe announced a new strategy for reseeding the market with a single, standardised Flash runtime, creating the Open Screen Project, an alliance of mobile industry partners to help push this new vision. So why the change of plan?

In the historically closed, “wild west” mobile ecosystem, web content providers and developers have found it too difficult to reach mobile devices. In practical terms, it was too difficult for the global Flash community to reach consumers, and to do so in a manner consistent with the consumer reach of desktop content. Japan has been the most successful region because of deep involvement from NTT DoCoMo and Softbank, and because it enabled the use of consistent web distribution.

That said, agencies such as Smashing Ideas, ustwo and CELL (sorry to those I’m missing out) have established valuable businesses in this space by building strong partnerships with OEMs.

At the top end of this success scale, Forbes recently reported that Yoshikazu Tanaka has become the first Flash billionaire with the incredibly successful Flash Lite games portal Gree in Japan. (Gree is a “web service”, not desktop or mobile, and is indicative of what can be achieved using Flash as a purely horizontal technology across devices.)

In all, our distribution and scaling plans worked very well for Adobe, but outside Japan the mobile “walled gardens”, and the web on devices today, didn’t work for our customers.  The cost of doing business with multiple carriers in North America and Europe and the lack of web distribution to a common runtime left our customers with few choices. It was time for a new plan.

Open Screen Project
Delivering on the Open Screen Project vision at global scale with 70 partners is a huge task; it was always going to take about two years. We are very much on schedule with Flash Player 10.1 and AIR, although eager to see them roll out.

However, describing the goals of the Open Screen Project in terms of dates, forecast market share, Apple’s phone or their upcoming tablet, specific chipsets or Nokia hardware is to miss the whole point.  The Open Screen Project is not a “mobile” solution; it’s about the global content ecosystem.

In summary – connecting millions of our developers and designers with consumers via a mix of marketplaces and the open web.

Google and Microsoft are great examples of companies that have competing technologies and services, but both still use Flash today to reach consumers. Google uses Flash for Maps, Finance and YouTube, and Microsoft for MSN Video and advertising. So indeed we have co-opetition between Silverlight and Flash, or Omniture and Google Analytics, but together our goal is to enable consumers to browse more of the web on Android, Windows Phone and other devices in the future.

Today, over 170 major content providers (including Google) are working with us to optimize their HTML and Flash applications for these mobile devices. In the coming months we’ll begin the long roll-out process, updating firmware and enabling Flash Player downloads on OEM marketplaces. We’re projecting that by 2012, 53% of smartphones will have Flash Player installed.

It’s really exciting to see it coming together with so many big names involved – why not have a peek behind the curtain?

Flex Mobile Framework
To make the creation of cross-platform applications even simpler, Adobe is working on the Flex Mobile Framework. Essentially, we have taken all the best elements of the open source Flex 4 framework and optimized them for mobile phones.

Using the framework and components you will be able to create applications that automatically adapt to orientation and lay out correctly on different screens. The most important addition is that the Flex Mobile Framework “understands” different UI paradigms across platforms. For example, the iPhone doesn’t have a hard back button, so the Navigation bar component will present a soft back button on that platform.
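The adaptation idea described above can be sketched roughly as follows. This is a hypothetical illustration of the concept in Python, not Adobe’s actual Flex Mobile Framework API; the component name, platform strings and control names are all invented for the example.

```python
# Hypothetical sketch: one navigation component, different presentation
# per platform. Platforms with a hardware back button don't need a
# soft one drawn on screen; platforms without one (e.g. iPhone) do.

class NavigationBar:
    # Assumed set of platforms that expose a hardware back button.
    HARDWARE_BACK = {"android", "blackberry"}

    def __init__(self, platform: str):
        self.platform = platform.lower()

    def render(self) -> list:
        """Return the list of controls this bar would draw."""
        controls = ["title"]
        if self.platform not in self.HARDWARE_BACK:
            # No hardware back button: prepend a soft back button.
            controls.insert(0, "soft-back-button")
        return controls

print(NavigationBar("iPhone").render())   # ['soft-back-button', 'title']
print(NavigationBar("Android").render())  # ['title']
```

The point is that the platform check lives inside the component, so application code using the bar stays identical across platforms.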

In terms of developer workflow, we expect that all background logic of applications will run unchanged. User interfaces and high-bitrate video will need some adjustments for some hardware, though most changes will be basic: bigger buttons, higher-compression video and adapting HTML for mobile browsers.

Over time with the Flex Mobile Framework, our goal is to enable our customers to create their applications within a single code base, applying some tweaks per platform for things like Lists, Buttons or transitions. In this sense we can expect to enable the creation of applications and experiences that are mobile-centric, yet cost-effective, by avoiding fragmented solutions where appropriate.

We are aiming to show the Flex Mobile Framework later in the year, and I’d love to see it supported in Catalyst in the future.

The Year Ahead
Throughout 2010 we will see Flash Player 10.1 on Palm’s WebOS, Android 2.x, with Symbian OS and Windows Phone 7 coming in the future. In addition to that we also have plans to bring Flash Player 10.1 to Blackberry devices, netbooks, tablets and of course the desktop. For less powerful feature phones we’ve got Flash Lite, and all of these platforms will demonstrate Flash living happily with HTML5 where it’s available.

Adobe AIR 2 is also in beta right now, enabling users to create cross-platform applications that live outside the browser on Windows, Mac and Linux computers. AIR is of course mobile ready, and later in the year we’ll be bringing AIR to Android phones, netbooks and tablets. On top of that, you will also be able to repackage your AIR applications for the iPhone with Flash Professional CS5 very soon.

The rollout and scale of Flash Player and AIR distribution over time are now inevitable, having been largely committed over a year ago.

There are risks of course; these ecosystems are moving targets just like they have always been.  However, I’m extremely confident that we can build upon our previous successes, learn from our mistakes and innovate faster than any of our competitors.

– Mark Doherty
Platform Evangelist for Mobile and Devices at Adobe

Demolition Derby in Devices: The roller-coaster ride is on

[The economic realities will lead to a roller-coaster ride that will shake up the mobile industry. Guest blogger Richard Kramer talks about the impending price war, the implications for industry growth, and how this will alter the landscape of device vendors in the next decade]

With all the discussion of technology trends on the blogosphere, there are some harsh economic realities creeping up on the handset space. The collective efforts of vendors to deliver great products will lead to an all-out smash-up for market share, bringing steep declines in pricing.

In November 2009 I wrote a note about what Arete saw as the impending dynamics of the mobile device market. I called it Demolition Derby. This followed on from a piece called Clash of the Titans, about how the PC and handset worlds were colliding, brought together by common software platforms and common chipset architectures. As handsets morphed into connected devices, the door opened for computing industry players, who are now flooding in.

New categories of non-phone devices
A USB modem/datacard market of 70m units in 2009 should be counted as an extra third of the smartphone market, as it connected a range of computing devices. By the end of 2010, I believe there will be many new categories of non-phone mobile devices to track (datacards, embedded PCs, tablets, etc.), and they may equal the high-end smartphone market in units in 2011. Having looked at the roadmaps of nearly every established and wannabe vendor in the mobile device space, I cannot recall a period in the past 15 years of covering the device market with so many credible vendors, most with their best product portfolios ever, tossing their hats in the ring. I see three things happening because of this:


1. First, a brutal price war is coming. This will affect nearly every segment of the mobile device market. Anyone who thinks they are insulated from this price war is simply deluded. I have lost count of the number of vendors planning to offer a slim, touch-screen, mono-bloc Android device for H2 2010. The only things that will set all these devices apart will be brand and, in the end, price. Chipmakers – the canaries in the handset coal mine – are already talking about slim HSPA modems at $10 price points, and combined application processors and RF at $20. Both Huawei and ZTE are now targeting top-three positions in devices, with deep engagements developing operator brands; they are already #1 and #2 in USB modems. Just look at the pricing trends ZTE and Huawei brought to the infrastructure market; the same will come to mobile devices.

2. Second, growth will rebound with a vengeance. I expect 15% volume growth in 2010, well ahead of the cautious consensus of 8%. I first noted this failure of vision in forecasting in a 2005 note entitled “A Billion Handsets in 2007”, when the consensus was looking for 6% growth whereas we got 20%+ growth for three years, thanks to the onset of $25 BoM devices. Consumers will not care about software platform debates or feature creep packing devices with GHz processors in 2010. Ask your friends who don’t read mobile blogs and aren’t hung up on AppStores or tear-downs: they will simply respond to an impossibly wide choice of impossibly great devices, offered to them at impossibly cheap prices.

3. Third, the detente is over. The long-term stability that allowed the top five vendors to command 80% market share for most of this decade is breaking down. This is not simply a question of “Motorola fades, Samsung steps in” or “LG replaces Sony Ericsson in the featurephone space”. Within a year, there could be dangerously steep market share declines among the former market leaders (i.e. Nokia), to accompany their decline in value share. Operators are grasping control of the handset value chain; many intend to follow the lead of Vodafone 360 and develop their own range of mid-tier and low-end devices. Whether or not this delivers better user experiences, operators are determined to direct their subsidy spend to their favourite ODM partners. In developed markets, long-established vendors are being eclipsed: in 2010, RIM or Apple could pass traditional vendors like Sony Ericsson or Motorola in units. RIM and Apple have already handily out-paced older rivals in sales value and, with $41bn of estimated sales in 2010, are on a par with Nokia.

Hyper competition
So where does this lead us? Even with far greater volumes than anyone dares to imagine, there is no way to satisfy everyone’s hopes of share gains, or profits. With Apple driving to $25bn in 2010 sales and Mediatek-based customers seeking share in emerging markets, the mobile device market is entering a phase of hyper-competition. It is all too easy for industry pundits to forget that Motorola and Sony Ericsson collectively lost over $5bn in the past 2.5 years. More such losses are to come.

Never before have we seen so many vendors acting individually rationally, but collectively insanely. Albert Einstein once famously said that “the definition of insanity is doing the same thing over and over but expecting a different result”.

The men in the white coats will have a field day with the mobile device market in 2010.

– Richard

[After four years as the #1 rated technology analyst in Europe, Richard Kramer left Goldman Sachs in 2000 to form an independent global technology research group. Arete has 10 years’ experience dissecting the financials and industry trends in semis, software, devices and telecom operators, out of offices in London, Boston, New York and Hong Kong. Richard can be reached at richard [dot] kramer [at] arete.net]

2010 in review: Under-the-radar trends at Mobile World Congress

[Following a week of frantic announcements and marketing hype at MWC 2010, VisionMobile’s Research Director, Andreas Constantinou looks at what really matters – the under-the-radar trends that will make the biggest impact in the next two years]


The annual Mobile World Congress, besides being a circus frenzy of 49,000 people, has traditionally been a barometer of mobile industry trends. This year we look at the under-the-radar trends that may have gone unnoticed, but that will make a major impact during 2010-11.

1. Building developer bridges
If there was a theme to this year’s Mobile World Congress, it was developers. This year’s App Planet show-within-a-show gathered 20,000 visitors, making the stands of LTE vendors and the CBoss showgirls look pale in comparison.

Imagine that. After years and years of effort in ‘pushing’ the next-gen killer technology (on-device portals, Mobile TV, widgets, ..), the mobile industry is finally seeking inspiration beyond its own confines: from the software developers who will generate even more ‘apps for that’ and drive the innovation that will actually pay for the bandwidth investments.

The race is on to grab the best mobile developers – and the mobile industry is spending big money on it. This year’s sponsors of mobile developer contests and events are not just platform providers or handset OEMs. Just look at some of the sponsors of the WIP Jam developer event at MWC: Qualcomm, Alcatel-Lucent, Ericsson, NAVTEQ, O2 Litmus and Oracle.

Developer mindshare is expensive, as developers have to be attracted away from other platforms in which they have invested; as such, we would argue that the average DAC (developer acquisition cost) is much higher than the average SAC (subscriber acquisition cost). Thankfully there are plenty of marketing budgets to throw at the challenge. Palm is spending $1 million to build its own developer community in a desperate effort to win back its once-thriving community of mobile developers.

It’s ironic that it took the mobile industry only 20 years to learn what the software industry has understood since the early 1990s: that the smartest people work for someone else, but they will gladly work for your platform if you give them the right tools and audience. And it’s most appropriate that this realisation is happening right now, as the two industries come together in the post-iPhone era.

One of the big announcements at this year’s MWC was the Wholesale Application Community (WAC), the new operator collaborative effort at connecting to developers. WAC is born out of the merger of two initiatives: OMTP’s BONDI (device API specs for securely accessing user information on the device) and the Joint Innovation Lab, JIL (which, hype aside, has delivered only a widget spec). WAC is a statement of operator intent to collaborate, but one that has yet to decide what it will deliver.

The GSMA App Planet, WIP Jam, WAC and many other initiatives are trying to capitalise on one of the hottest, yet perhaps most understated trends of 2010: building commercial bridges, or matchmaking platforms, between software developers and the mobile industry. Next question: what’s your platform’s DAC (developer acquisition cost)?

[shameless plug: at VisionMobile, we’re running the biggest mobile developer survey to date, spanning 400+ developers, 8 platforms and 35+ data points across the entire developer journey. Best of all, the results will be published freely thanks to sponsorship by O2 Litmus]

2. Quantum leap in mobile devices
Industry pundits have been over-optimistic about the dominance of smartphones time and time again; but contrary to predictions, smartphone market share has remained at circa 15-17% of sales, as phone manufacturers have remained risk-averse. Instead of porting high-cost, high-risk operating systems like Symbian and Windows Mobile to mass-market phones, OEMs have preferred to patch their legacy low-risk RTOS platforms with high-end features (read: touchscreens, widgets and the like) – see earlier analysis here.

Yet the mobile software map is about to change rather abruptly – not because of Android, but because chipset vendors are making the leap to sub-40nm manufacturing. Chip cost plays a major role in handset BOM (bill of materials), and that cost is directly proportional to the surface area of the silicon (excluding royalty payments). With the move to sub-40nm manufacturing processes, you can fit a GPU (graphical processing unit) and even ARM Cortex architectures within the same die size. This means that the smartphone BOM will drop from $200 to $100 in only two years, based on our sources at chipset vendors – and implies that MeeGo, Symbian, Windows Mobile and Android can penetrate a far larger addressable market than was possible before.
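As a back-of-the-envelope illustration of that cost argument: if silicon cost tracks die area, and die area for the same design scales roughly with the square of the process feature size, then a shrink from 65nm to 40nm alone cuts the die to well under half its former area. All figures in the sketch below are illustrative assumptions, not vendor data.

```python
# Rough arithmetic: silicon cost scales with die area, and die area
# for the same design scales with the square of the feature size.
# The dollar figures are illustrative assumptions only.

def relative_die_area(old_nm: float, new_nm: float) -> float:
    """Area ratio when shrinking the same design from old_nm to new_nm."""
    return (new_nm / old_nm) ** 2

# Shrinking a 65nm chipset design to 40nm:
shrink = relative_die_area(65, 40)
print(f"40nm die is ~{shrink:.0%} of the 65nm area")

# If chipset silicon were, say, $60 of a $200 smartphone BOM, the
# shrink alone (before integrating the GPU on-die rather than as a
# separate part) takes a large bite out of the total BOM.
chipset_cost = 60 * shrink
print(f"Hypothetical chipset cost after shrink: ${chipset_cost:.0f}")
```

The shrink yields roughly a 60% area reduction, which, combined with on-die integration of previously separate parts, is the mechanism behind the $200-to-$100 BOM trajectory the article describes.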

Adobe is banking on this very trend, planning (hoping?) that Flash penetration will reach 50% of smartphones by 2012, or circa 150M devices sold per year. Similarly, Nokia sees revenue contributions from S40 handsets dwindling from around 55% in 2009 to 35% in 2011, replaced by MeeGo (circa 10%) and Symbian (circa 55%) – see the slide from Nokia’s Industry Analyst event. This also goes to show Nokia’s continuing investment in Symbian, at a time when the future of the Symbian Foundation is shady.

Virtualisation technology is further accelerating the BOM reduction by allowing the likes of the Android and Symbian OSes to sit on the same CPU as the modem stack. OK Labs introduced off-the-shelf reference designs for virtualised Android and Symbian earlier in 2009, while at MWC 2010 VirtualLogix announced similar deals with ST-Ericsson and Infineon. The third (and last!) virtualisation vendor, VMware (which acquired Trango), is yet to make a similar move.

Last but not least, we are seeing new attempts at re-architecting low-cost smartphone software. Qualcomm is making a comeback with its BREW MP software, positioning it as a feature-phone operating system and winning major commitments from AT&T. Kvaleberg (a little-known Norwegian engineering company) has productised its 10 years of feature-phone integration know-how into Mimiria, a feature-phone OS with a clean-room UI architecture that makes variant creation a swift job, requiring only 2-3 engineers to customise. Myriad has announced an accelerated Dalvik implementation that speeds up Android apps by up to 3x, allowing them to run more comfortably on mass-market designs.

3. Analytics everywhere
Another under-the-radar trend at MWC 2010 was analytics, which was making inroads into the feature set of products across the spectrum – from SIM cards and devices to network infrastructure solutions.

Application analytics is the only visible tip of the iceberg for now, with analytics services available from Adobe, Apprupt, Bango, Distimo, Flurry (merged with PinchMedia), Localytics, Medialets, Mobclix and Motally. There is also plenty of innovation to be had here, with one startup (still in stealth mode) delivering design-time analytics on the type of applications and their use cases, and another delivering personal TV program management and monetising (among other things) the analytics on which TV programs users are watching, searching and sharing.

Moreover, analytics is slowly penetrating operator networks to deliver smarter campaign management, subscriber analysis and network performance monitoring. There is a long list of vendor solutions here, from Agilent, Airsage, Aito, CarrierIQ, Rewss, Umber Systems, Velocent, Wadaro and xTract, among others. One related under-the-radar announcement was that from SIM manufacturer Giesecke & Devrient (G&D), which is launching a product for measuring network quality on the handset.

Taking analytics to the next level, the GSMA and comScore recently launched the Mobile Media Metrics product. This is the first census-level analytics product for measuring ad consumption and performance, starting with the UK market, and it follows the lucrative business model of TV metrics.

Analytics is indeed the most under-hyped trend, whose magnitude the industry will only realise 5-10 years from now.

4. Mobile identity in the cloud
Cloud storage for personal data is ubiquitous on the Internet; Google Buzz, Facebook and Dropbox are perhaps the epitome of this trend. The mobile industry has traditionally lagged behind, but is rapidly catching up in 2009-10 with the cloud-stored Windows Mobile UI, the social-networking connectivity layer on the idle screen as seen in Microsoft’s OneApp, the socially-connected handsets from INQ Mobile, HTC and Motorola (Motoblur), and the 10+ solution vendors who offer address-book syncing solutions (Colibria, Critical Path, Funambol, FusionOne, Gemalto, Miyowa, Newbay and many more).

We used to think of user data as migrating from the SIM card (the operator stronghold) to the handset (the OEM territory). Now the data is once again migrating away from the handset to the cloud, the home-turf of Internet players.

This is the next battlefield in the land grab to define the interfaces that determine access to our mobile identity. There are two camps competing here: the Internet players who have defined user-data access standards (Google, Facebook and Twitter), versus the players who have defined mobile data access standards to date (network operators – see Vodafone 360 – and handset OEMs – see Nokia Ovi).

This is one of the important battles that will determine who can reap the most profit from user information by controlling the interfaces that connect it to the outside world (for background, see Clayton Christensen’s thesis on the relationship between interfaces and profits). And it’s also what network operators should be rushing to standardise right now, in one of the last battles that will determine their smart-pipe vs. bit-pipe future.

Comments welcome as always,

– Andreas

MeeGo: Two (M)onkeys don't make a (G)orilla. But they sure make a lot of noise

[What is behind the announcement of the MeeGo operating system by Nokia and Intel? Guest blogger Thucydides Sigs deconstructs what MeeGo means and its importance to the mobile industry]

How much substance is behind the noise of Nokia’s and Intel’s announcement of MeeGo? A few points to consider.

Nokia, which feels threatened by Google’s Android and Chrome OS efforts, is putting significant effort into expanding into other device categories and bringing its Ovi services to more consumers in more places. So a move that brings Maemo – together with Ovi, the underlying Web-runtime apps and the Qt cross-platform framework – to Intel chipsets is a straightforward strategic win. It will allow Ovi services – such as Maps – to get into non-mobile devices, especially automotive (which has been a strategic focus for Intel) and other connected but wired devices (after all, power consumption is Intel’s Achilles heel), such as home phones.

So is Nokia going to bet its future Linux devices on a group of Intel engineers? Nokia is smarter than that: Intel software engineering has never been something to write home about. And Nokia has always been careful to maintain and win control over strategic areas. So Nokia will either maintain a parallel internal effort or keep tight control over the ARM port and the overall MeeGo architecture.

Is MeeGo really going to bring Ovi services and Maemo into the hands of tens of millions more consumers? Well, MeeGo opens a door, but success will depend on the quality of the Maemo and Ovi experience. Maemo v6, due late this year, will be catching up to where Android and webOS were half a year ago, and where Apple was a year ago. So it is still one or two years behind the rest of the industry. That said, Maemo does not need to be the best – it needs to be good *enough* for ‘mass market’ consumers, so that, combined with Nokia’s industrial design expertise and marketing power, an “object of desire” can still be delivered.

It’s this consumer “desire” that brings us to the Ovi services angle – and the question of how good Nokia’s services offering will be. Studying the Nexus One, it is impressive to see how seamlessly Google has connected its many service offerings, creating a compelling integrated experience: from a photo gallery that is both local and web-based (Picasa), through Google Voice (low-cost calls, transcribed voice messages), to an almost perfect navigation and mapping experience (including turn-by-turn voice instructions and maps). Contacts, email and calendaring are the basics that are must-haves. And Google is quickly expanding into other services (note the recent Aardvark acquisition and the Buzz launch). Yes, MeeGo gives Nokia a vehicle to bring Ovi to some other device segments, but can Ovi compete effectively with Google’s breadth of services?

What about Intel? It has been spending hundreds of millions of dollars on a software strategy that does not seem to show a clear path to recouping the investment. Moblin has not shipped in any significant volumes, is inferior to either Chrome OS or Android from a software platform perspective, and lacks any kind of services offering (which is why Intel needed Ovi). If Intel thinks that software is another part of its vertically integrated stack that will differentiate its chipsets, then it does not make sense to open it up and make it an open industry initiative. If Intel truly believes that Moblin should be open and used by competing ARM chipset vendors, then what does it gain from spending those hundreds of millions of dollars on the effort?

Open source: Chrome OS, Android and Maemo are creating a very different software ecosystem than the one Intel got used to with Microsoft in the 90s. None of the software players is going to generate significant revenues on the device side. Intel execs might want to re-read Andy Grove’s book, step outside the box and ask themselves whether their software effort still makes sense in the 2010 industry context.

And while Intel is spending time on building this software strategy, the chipset market is experiencing a disruptive change, shifting from computing power (where good-enough performance is delivered by both Intel and ARM) to battery power and mobility, where ARM is clearly superior. It might be better for Intel to focus its efforts back on its chipset technology and fix its power consumption problems, because when it comes to wireless devices (either within the home or outside – anything that is not tethered to a power cord), its offering is inferior to ARM, and no amount of software will be able to cover this gaping hole.

What about the rest of the chipset industry? Will the other ARM chipset vendors, such as TI, Qualcomm, Broadcom and nVidia, follow suit and join MeeGo? It’s hard to imagine that any of those companies will want to entrust their software strategy to Intel: not only is Intel a direct competitor, its software skills leave a lot to be desired, and its long-term commitment to the space (as outlined above) is not clear. Is Nokia’s involvement enough of a carrot to entice those vendors into MeeGo? Having Maemo running on top of MeeGo will make insertion into Nokia easier, but Maemo is open source and there is nothing stopping the chipset vendors from porting Maemo to their chips on their own, or with the help of independent third parties. So we suspect Nokia will give it a modest try, but when it comes to purchasing chips, power, performance and cost will still be the over-riding criteria for Nokia.

So, lots of noise from those two monkeys, but little impact. MeeGo seems to be cute (qt) and (h)armless, but not a big industry changer.

– Thucydides

[Thucydides Sigs – a pseudonym – has many years of experience juggling computing constraints, mobile software and consumer needs. With that said, imagine listening to a violin sonata without knowing who the artist is or who composed it: you end up having to listen more carefully in order to make a judgment. He can be reached at thucydides /dot/ sigs [at] gmail [dot] com]