UI Technologies are trendy…but what are they really good for?

[UI development flow and actors: Graphical Designer, Interaction Designer, Software Engineer; classical technologies: GTK, Qt; next generation: Flex, Silverlight, WPF, TAT, XUL, SVG… Guest blogger Thomas Menguy describes the main concepts behind all the UI technologies, what the new-generation ones have in common, what those modern approaches bring to the product development flow… and what is missing for the mobile space].

A good UI is nothing without talented graphical designers and interaction designers: how is the plethora of new UI technologies helping to unleash their creativity? What are the main concepts behind those technologies? Let’s try to find out!

UI is trendy… thank you MacOS X, Vista and iPhone!

image
UIQ

image
S60

image
iPhone

Put the designers in the application development driver’s seat!

Here is a little slide about the actors involved in UI design:

image

UI flow actors and their expertise

What does it mean?

Different actors, different knowledge… so different technologies and different tools!

Those three roles can be clearly separated only if the UI technology allows it. This is clearly not the case in today’s mainstream UI technologies, where the software engineer is in charge of implementing both the UI and the service part, most of the time in C/C++, based on specifications (Word documents, Photoshop images, sometimes Adobe Flash prototypes) that are subject to interpretation.

  • The technologies used by the designers have nothing in common with the ones used to build the actual UI.

  • The technologies that allow UI implementation require heavy engineering knowledge.

  • Big consequence: the software engineer decides in the end!

The picture is different for web technologies, where it has been crucial and mandatory to keep the service backend strongly decoupled from its representation: web browsers have different APIs and behaviors, backends have to be accessed in many other ways than through the web representation… and above all, data is remote and presentation is “half local/half remote”.

Separating representation, interaction and data has been the holy grail of application and service development for years. It has been formalized through a well-known pattern (or even paradigm, in this case): MVC (Model View Controller).

image

MVC pattern / source: wikipedia
From wikipedia: http://en.wikipedia.org/wiki/Model-view-controller
Model
The domain-specific representation of the information on which the application operates. Domain logic adds meaning to raw data (e.g., calculating if today is the user’s birthday, or the totals, taxes, and shipping charges for shopping cart items).
Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention the data access layer because it is understood to be underneath or encapsulated by the Model.
View
Renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model for different purposes.
Controller
Processes and responds to events, typically user actions, and may invoke changes on the model.

All UI technologies offer a way to handle these three aspects and, as a consequence, provide a programming model defining how the flow of information and events is handled through the MVC.
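As a minimal sketch of the three roles (in Python, using a hypothetical shopping-cart example echoing the Wikipedia definition above; all class and method names are invented for illustration):

```python
# Minimal MVC sketch: the model holds domain data and logic, the view
# renders it, and the controller turns user events into model changes.

class CartModel:
    """Model: domain data plus domain logic (totals), no UI knowledge."""
    def __init__(self):
        self.items = []          # list of (name, price)
        self.listeners = []      # views observing the model

    def add_item(self, name, price):
        self.items.append((name, price))
        for listener in self.listeners:
            listener.refresh(self)

    def total(self):
        return sum(price for _, price in self.items)

class CartView:
    """View: renders the model; several views could watch one model."""
    def __init__(self, model):
        self.last_render = ""
        model.listeners.append(self)

    def refresh(self, model):
        self.last_render = f"{len(model.items)} item(s), total {model.total():.2f}"

class CartController:
    """Controller: maps user events to model operations."""
    def __init__(self, model):
        self.model = model

    def on_add_clicked(self, name, price):
        self.model.add_item(name, price)

model = CartModel()
view = CartView(model)
CartController(model).on_add_clicked("book", 12.50)
print(view.last_render)  # -> 1 item(s), total 12.50
```

The point of the split: a second view (say, a plain item count) could observe the same model without touching the controller, which is exactly what the widget-centric frameworks below make difficult.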

See below a simple schema I’ve made describing a GTK application: when you look at an application screen, it is made of graphical elements such as buttons, lists, images and text labels, called widgets (or controls).

Remark: the term “widget” is used here with its literal meaning, “window gadget”. The term is now used a lot in Web 2.0 marketing terminology and by Yahoo/Google/Microsoft to mean a “mini application” that can be put on a web page or run through an engine on a desktop PC or a mobile phone. To avoid confusion I prefer the term “control” over “widget” for UI technologies, but I will keep using “widget” in the rest of the GTK example, as that is the term used by GTK itself.
Widgets are organized hierarchically in a tree, meaning that a widget can contain other widgets; for example, a list can contain images or text labels. In the example below the “root” widget is called a “Window”: it contains a kind of canvas, which itself contains a status bar, a title bar, a list and a softbutton bar. The list contains items, the title bar has a label, the softbutton bar contains some buttons, and so on.

A widget is responsible for:

  • Its own drawing, using a low-level rendering engine, called GDK in the GTK case (GDK offers APIs like draw_image, draw_text, etc.).
  • Computing its size according to its own nature (like the size of the text to be displayed) and the size of its children.
  • Reacting to some events and emitting specific ones: a button will emit a “press” event when it is pressed on the touchscreen or when its associated keypad key is pressed.

The widget tree propagates system events (keypad/touchscreen, etc.) and internal events (redraw, size change, etc.) through the widgets. The developer registers callbacks (functions implementing a piece of functionality) that are called when widgets fire events (like the “press” event).
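As a sketch of this tree-plus-callbacks structure (Python with invented names; the real GTK API is C, e.g. g_signal_connect):

```python
# Sketch of a widget tree with GTK-style signal/callback registration.
# Events fired on a widget invoke the callbacks the developer connected.

class Widget:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.callbacks = {}   # event name -> list of callbacks

    def add(self, child):
        self.children.append(child)
        return child

    def connect(self, event, callback):
        # in the spirit of g_signal_connect(widget, "clicked", cb, data)
        self.callbacks.setdefault(event, []).append(callback)

    def emit(self, event):
        for cb in self.callbacks.get(event, []):
            cb(self)

window = Widget("window")
softbuttons = window.add(Widget("softbutton_bar"))
send_button = softbuttons.add(Widget("send"))

pressed = []
send_button.connect("press", lambda w: pressed.append(w.name))
send_button.emit("press")   # keypad or touchscreen press
print(pressed)  # -> ['send']
```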

image

GTK Widget tree structure: a phone screen example

The major GTK/GLib formalism is how those events and callbacks are handled: through the GLib main loop, where all events are posted to the loop’s queue, dequeued one by one and “executed” in the loop, meaning their associated user callbacks are called. This loop runs in a single thread. This is what we call a programming model. In nearly all UI technologies such a loop exists, with varying formalisms for queue handling, event representation, etc.
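Stripped of all GLib details, such a loop can be sketched as a single-threaded queue (a Python sketch with invented names, not the real GLib API):

```python
from collections import deque

# Single-threaded event loop sketch: events are posted to a queue,
# then dequeued one by one and dispatched to their registered callbacks.

class EventLoop:
    def __init__(self):
        self.queue = deque()
        self.handlers = {}    # event type -> callback

    def register(self, event_type, callback):
        self.handlers[event_type] = callback

    def post(self, event_type, data=None):
        self.queue.append((event_type, data))

    def run_once(self):
        """Drain the queue; a real loop blocks waiting for new events."""
        while self.queue:
            event_type, data = self.queue.popleft()
            handler = self.handlers.get(event_type)
            if handler:
                handler(data)

log = []
loop = EventLoop()
loop.register("key_press", lambda key: log.append(f"key:{key}"))
loop.register("redraw", lambda _: log.append("redraw"))
loop.post("key_press", "5")
loop.post("redraw")
loop.run_once()
print(log)  # -> ['key:5', 'redraw']
```

Because everything runs in one thread, callbacks never race each other, which is why long-running work inside a callback freezes the whole UI.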

To finish with the above schema: the user callback will then access the middleware services, the various databases and so on.

There is no clear MVC formalism in this case: the controller is mixed with the view… and even the model is mixed with the widgets (so with the view)!

Qt’s model is essentially identical to this one.

One last point, very relevant for application development and design: the notion of states. Each application is in fact a state machine displaying screens linked by transitions, like in the example below: in state 1 the user decides to write an SMS, which opens an SMS editor screen, and clicking send goes to a selection of phone numbers.

image

Application State Machine: write sms example
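Such a screen flow can be sketched as a table-driven state machine (a Python sketch; the state and event names are invented):

```python
# Table-driven application state machine: (state, event) -> next state.
# States are screens; events (key presses, menu choices) are transitions.

TRANSITIONS = {
    ("idle", "write_sms"):        "sms_editor",
    ("sms_editor", "send"):       "contact_picker",
    ("contact_picker", "select"): "sending",
    ("sms_editor", "cancel"):     "idle",
}

def next_state(state, event):
    # Unknown events leave the application on the current screen.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["write_sms", "send", "select"]:
    state = next_state(state, event)
print(state)  # -> sending
```

Keeping the transition table as data rather than code is what lets design tools draw and edit the application flow directly.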

Here is an attempt to formalize a modern UI framework with Data binding (for Model abstraction).

image

UI engines formalization
Control: equivalent to a widget, but where the MVC model is fully split. A Data Model has to be associated with it, alongside a Renderer, to make it usable.
Control Tree: equivalent to the widget tree: aggregation of Controls, association of the controls with a Renderer and a Data Model. Possibly specification of Event Handlers.
Data Model: Object defining (and containing when instantiated) a set of strongly defined and typed data that can be associated with a Control instance.
Data Binding: Service used to populate a Data Model.
Control Renderer: Object that is able to graphically represent a Control associated with a Data Model, using services from a Rendering Library.
Rendering Library: Set of graphical primitives, animations, etc.
Event Handling (and Event Handler): code (any language) reacting to events and modifying the current state machine, the Control Tree, etc.
Standardized Services: Interfaces defined to access middleware directly from the event handling code.
Server Abstraction: Possibility to transparently use Data Binding or any service call locally or remotely.
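To make the split concrete, here is a rough sketch of a Control with its Data Model, Data Binding and Renderer (Python; every name is invented for illustration, this is not any real framework's API):

```python
# Sketch of a control with a fully split MVC: the control only declares
# what typed data it needs; a binding populates the model; a renderer
# turns (control, model) into rendering-library calls.

class ContactListModel:
    """Data Model: strongly defined, typed fields for one control."""
    def __init__(self):
        self.names = []          # list[str]

def bind_from_phonebook(model, phonebook_service):
    """Data Binding: populate the model from a (local or remote) service."""
    model.names = phonebook_service()

def render_contact_list(control_name, model):
    """Control Renderer: map the model to rendering-library primitives."""
    lines = [f"draw_text({name!r})" for name in model.names]
    return [f"begin({control_name})"] + lines + ["end()"]

# A fake middleware call standing in for the "Standardized Services";
# thanks to the binding, it could equally be a remote server call.
phonebook = lambda: ["Alice", "Bob"]

model = ContactListModel()
bind_from_phonebook(model, phonebook)
print(render_contact_list("contact_list", model))
```

Note how the control never touches the phonebook directly: swapping the local service for a remote one (the Server Abstraction above) changes only the binding.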

Ok if you are still there, and your brain is still functional, here is what’s happening today in this area….

In traditional UI frameworks like GTK, Qt, Win32, etc., the control tree is described in C/C++ code… but a little niche technology has paved another way: HTML. After all, an HTML web page defines a tree of controls; the W3C uses a pedantic term for it: the DOM tree. JavaScript callbacks are then attached to those widgets to allow user interaction. That is why all the new UI technologies are based on an XML description of this tree: it is much easier to use, allows a quicker description of the controls, and above all it lets nice design tools manipulate the UI… Apart from this XML representation, the majority of the UI technologies come with:

  • An animation model allowing smooth transitions, popularized by the iPhone UI, but already present in MXML (Adobe Flex format), XAML (Microsoft format), SVG, the TAT offering…
  • Modern rendering engines (Flash for Flex, Microsoft’s own, TAT Kastor).
  • Nice UI tools for quick implementation: Adobe Flex Builder, the Microsoft Expression line, TAT Cascades, Digital Airways Kide, Ikivo SVG…
  • In many cases: a runtime, to be able to run a scripting language.
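To make the XML idea concrete, here is a tiny sketch using the Python standard library's XML parser; the markup dialect is invented, not actual MXML or XAML:

```python
import xml.etree.ElementTree as ET

# A declarative control tree, in the spirit of MXML/XAML/XUL:
# structure in XML, behavior referenced by handler name.
UI_XML = """
<window title="Messages">
  <list id="inbox"/>
  <softbutton_bar>
    <button id="write" label="Write" on_press="open_editor"/>
  </softbutton_bar>
</window>
"""

def build_tree(element):
    """Walk the XML and 'instantiate' one control per element."""
    return {
        "type": element.tag,
        "attrs": dict(element.attrib),
        "children": [build_tree(child) for child in element],
    }

root = build_tree(ET.fromstring(UI_XML))
print(root["type"])                                             # -> window
print(root["children"][1]["children"][0]["attrs"]["on_press"])  # -> open_editor
```

Because the tree is plain data, a design tool can load, edit and save it without ever running the event-handler code, which is exactly what makes the designer/engineer split possible.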

Here are some quick (and far from complete) tables of some of the most relevant UI technologies in the PC and mobile phone space.

Just to explain the columns:

  • RIA : Rich Internet Application, delivered through a browser plugin
  • RDA : Rich Desktop Application: delivered through a desktop runtime
  • Runtime: OK, an overloaded name, but just a word for the piece of technology that allows the UI to run
  • UI: Technology to describe the control tree (you know what it means now!)
  • Event Handling: the dynamic UI part, and how to code it (which languages)
  • Tools: UI tools

 

image

RIA&RDA Chart

 

image

Embedded rich UI technologies

So it’s time to answer the main question of this post: how do these technologies help unleash designers’ creativity? By defining a new development flow, allowing each actor a distinct role.

Here is an Adobe Flex “standard” development flow:

image

Adobe Flex&Air tools flow

In the next schema I try to depict a more complete Adobe Flex flow, adapted to the mobile world, where, in my view, a central piece is missing today: it is not currently possible to extend the Adobe AIR engine “natively”, which is mandatory for mobile platforms with very specific hardware, middleware and form factors.

So I take the Adobe flow more as an example of how it should work than as the paradigm of the best UI flow, because this is not the case today (the same remark applies to the Microsoft flow; it is less true for TAT, for example).

 

 

image

An Adobe UI design flow for embedded

This clearly shows that the “creativity” phases are decoupled: the designers use different tools and can iterate together many times without needing the software engineer, who can focus on implementing the services needed by the UI, optimizing the platform, and adding middleware features.

  1. The Interaction Designer defines the application’s high-level views and rough flow
  2. The Graphical Designer draws those first screens
  3. The Interaction Designer imports them through Thermo
  4. The Graphical Designer designs all the application’s graphical assets
  5. The Interaction Designer rationalizes and formalizes what kind of data, events and high-level services the application needs
  6. The Interaction Designer and Software Engineer work together on the above aspects (HINT: A FORMALISM IS MISSING HERE); once done:
  7. The Software Engineer prepares all the events and data services and tests them in isolation; in brief, prepares the native platform
  8. The Interaction Designer continues working on the application flows and events, trying new things, experimenting with the Graphical Designer based on the formalism agreed with the Software Engineer.
  9. Once done… the application is delivered to the Software Engineer, who performs target integration, optimization… and perhaps (hum, certainly) some round trips with the other actors 🙂

So this is it! These flows really focus on giving power to the designers… taking it from the engineer’s hands. But some technologies are still missing to really offer a full mobile phone solution:

  • All the PC technologies are about building ONE application, not a whole system with strong interaction between applications. With today’s technologies the designers are missing this part… leaving it to the engineer: how to cleanly do animations between applications?
  • Strong theming and customization are needed for:
    • product variant management: operator variants, product variants (with different screen sizes and button layouts, for example), language variants (many phones have to support 60 languages, but in separate language packs).
    • A less well-known one: fast flashing of those variants on the factory line. It takes a long time to flash the whole software of a mobile phone on the factory line… so if you can have a big common part and a “customization” part that is as small as possible but contains the full UI… you gain productivity… and big money 🙂
  • An adapted preset of widgets or controls (try to build a phone with WPF or Flex: all the mandatory widgets are missing)

Anyway, a UI technology is only a way to interact with a user… to offer them a service. Most of the technologies presented above are about service delivery, not only UI… My next post will be about this notion of service delivery platforms.

[Update] Replaced the term “ergonomics specialist” by “Interaction Designer”, thanks Barbara, see comments below.

Thomas Menguy

The perils of managing 3rd party software: do's and don'ts

[All consumer electronics, mobile and software companies have to in-source third party software – but the perils and complexities of managing that software are several, as the software moves through your organisation and out to the customer. Guest blogger Åse Stiller distills years of experience in software licensing into simple guidelines for managing 3rd party software]

Hydra.png
Sourcing 3rd party software to be integrated in consumer electronics like mobile handsets is a complex affair. There are multiple challenges and threats, in a way similar to the multi-headed Lernaean Hydra, the mythical beast that Hercules fought in Greek mythology. But finding a Hercules to manage your mobile software issues is easier said than done – so is managing the challenges of complying with license terms that you have signed with third parties, as the software itself navigates through your organization.

The particular challenge I would like to draw your attention to in this article is the risk of unauthorized distribution or reuse of the third-party software.

It is easy to see the need to assess a potential supplier, but it is equally important to turn around and take a good look at your own company’s ability to safely manage the responsibility, End-To-End.

Whether you are developing a 2D game or a complete application framework, putting together software products is not always as “Lego-like” an activity as we like to draw in block diagrams for management. In many cases bringing in 3rd party source code is the only viable alternative, but you must be sure that everyone dealing with the code knows what they are doing – and you can’t very well ask all engineers to study the contract. Pardon my French, but it’s often hard to ask engineers to “RTFM” or, to be precise, “RTFC” (C for contract).

It is hard enough to get the right code, at the right price, in the right shape, delivered at the right time; but you must also get it under the right terms to fit well into your own development process, so as not to burden developers with unnecessary restrictions and contract details. Sourcing of software is, and must be, a team activity. Only a true Hercules can deal with all the Hydra’s heads single-handed.

Staying above the line of software commoditisation
Developing an application, game, application framework or any piece of software for mobile phones means adding unique and sexy features on top of a whole lot of commodity, and checking that nobody else gets there first. What was cutting-edge technology two years ago is now taken for granted: users don’t want to pay for Bluetooth or FM radio these days, and 5-megapixel cameras are slowly becoming the norm. The value line of software is moving up the stack with each passing month.

For software companies, as new technology ceases to be a marketing differentiator and turns into a standardized commodity, it becomes appropriate to share the costs of maintenance and further enhancements with competitors and partners in the same business. The easiest way to find these economies of scale is of course through sourcing software from Independent Software Vendors (ISVs) or by using software which comes under an open source license (e.g. the Eclipse IDE and the WebKit browser core).

Another good reason to source software is of course to get access to specific technology that you cannot, or cannot afford to, develop in-house. Some things are simply best done by experts, like preparing Japanese puffer fish or developing telephony modules for mobile phones.

In mobile phones, as well as all other complex consumer electronics, you will find a lot of common functionality developed and maintained by external ISVs. The types of components vary from highly visible functions like web browsers to completely anonymous drivers, and only very few suppliers will get their logo in a prominent place. A mobile phone would have to be the size of a car to allow for “NN-inside” stickers for all externally developed software components.

There may be many links in the chain between the original developer of a software component and the end-user. A lot of sourced components themselves come with software developed by others than the supplier, such as open source parsers and specific security solutions, and the further away you are from the original owner of the IP, the harder it is to keep track.
A good practice is to make sure that such components are declared well in advance, before you sign an agreement to source software, specifically if there are inherited restrictions or obligations pushed on to you and your customers. Make sure that the sourced software doesn’t come with an undetected Hydra-head embedded in the license. If there is Open Source with obligations to disclose code to end-users, this will affect you, or your customer, or your customer’s customer.

The perilous journey of software through an organisation
Regardless of the reason for sourcing 3rd party software, it is important to truly understand how the sourced component will be used in the internal development process, and how it will be integrated with the existing product. For example, there may be a need to distribute code to sub-contractors, pilot users, customers, additional development sites etc. during the development process. Your license must meet these needs and allow for that – or you must plan for an alternative way to work.

software-lifecycle-confusing.png

Deviations from the standard development process may add to the overall cost of the sourced component, and must be factored into the cost/benefit analysis. Naturally you must not panic, but maintain a healthy balance between the added cost of eliminating a risk and the weighted cost of an actual breach.

If the sourced component is self-contained and easy to replace, your worries are fewer, even with a restrictive license. However, with a less modular and more realistic dependency, you need to be more cautious about handling the code.

A good way to reach an understanding of all your needs for re-distribution of a sourced component is by identifying the End-To-End journey for the software through your company. You will then be able to anticipate what distribution rights must be catered for in the contract, and to prepare for a change in the process if you cannot win these rights in the license.

Another benefit of such practice is that you will gain a better understanding of how the third party product is to be integrated with your product. You can describe the level of integration and understand how that will impact your negotiation, or your choice of license for Open Source software.

The natural place for an end-to-end flowchart and for the description of the integration is in a Risk Analysis for the sourced component.
Such Risk analysis should also include the standard commercial risks, market risks, legal risks etc – but that is a different subject.

Plan for the software’s journey
There is no single solution to how to safely manage third party code through your development process; any solution must be case specific. Naturally it depends on the size of your organization and of how many will have access to the code, but it also depends on the level of integration with the rest of the product.

software-lifecycle-clear.png

Based on the End-To-End flowchart and the license for the software, you can provide a plan for the management of the third party code; you can identify an owner of the code in all stages of development, and if necessary specify a hand over process between development units, and eventually the hand over to your customers. You can define specific rules for the code and provide solutions before the issue becomes a problem.

As an extra bonus you may also find and be able to eliminate conflicts between licenses – e.g. an Open Source license forcing disclosure of code and another license prohibiting such disclosure. Another Hydra-head down.

Information is King
A development process is only as watertight for in-sourced 3rd party software as the team members involved in it. By identifying the different units that will be involved with the third party component, you will be able to locate who needs to be informed about the rights and obligations that come with it.
Knowledge about the restrictions for the component must be available to all those who actually access the code, at all times.

A good way to achieve this is to keep a simple database where you register the third party IP included in the product, and specify rules for how the code may be spread and used. Access to the information in the database must be granted to all those who come in contact with the code and therefore you may not want to add the contract as such to the database. Do not confuse this with a contracts database; that serves a different purpose.

Information in this database must be easy to find, easy to understand and interpret, and it shall clearly explain what you must and must not do with the code. For example, if you can only distribute to subcontractors or development sites that have been approved in the contract, those subcontractors and sites shall be listed in the database; and if you can only distribute binaries to customers, that must also be easy to detect for anyone who might need to release code to customers.
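As a sketch of the kind of record such a database could hold and the everyday question it should answer (Python; all fields, names and values are invented, this is not a real tool):

```python
# Sketch of a third-party IP register: one record per sourced component,
# with the distribution rules spelled out in plain terms.

components = [
    {
        "name": "ExampleParser",          # hypothetical component
        "license": "commercial",
        "owner": "contract-owner@example.com",
        "binary_to_customers": True,
        "source_to_subcontractors": ["ApprovedSub Ltd"],  # per contract
    },
]

def may_distribute_source(component_name, recipient):
    """Answer the everyday engineering question: can I send this code?"""
    for c in components:
        if c["name"] == component_name:
            return recipient in c["source_to_subcontractors"]
    return False  # unknown component: don't distribute, ask the owner

print(may_distribute_source("ExampleParser", "ApprovedSub Ltd"))  # -> True
print(may_distribute_source("ExampleParser", "SomeoneElse"))      # -> False
```

The design choice worth noting: the record stores the interpreted rules, not the contract text, so engineers get a yes/no answer without reading legal language.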

It is, of course, crucial that the information in the database is always correct and up to date. Therefore someone must be identified as the owner of the information. The owner of the contract is my first choice, since that is the person with the most to gain: the one who will be hit by the Hydra if something goes wrong, or will have a smooth and easy day at the office when all the questions are answered by the database.

Do’s and Don’ts of software sourcing
When you accept the responsibility and liability that come with signing a software license for source code – Open Source or closed source – you must not let price and liability totally overshadow the rights to use and distribute. Do your homework and get a good understanding of the intended use of the code before agreeing to the license terms. Analyze the risks and avoid problems further down the road by planning the management of the software End-To-End, and publish the restrictions for the code where they are easy to find and to update.

Get the team to help define the contractual needs for the sourced software. My experience is that insufficient or unclear license rights too often block engineers from doing their job. If you can get the engineering teams to help you prepare the sourcing better, by providing relevant input on how your company will use the sourced component, you stand a far better chance of getting the contract right from the start, and it will save you from panic changes to both contracts and project plans.

Educate everyone on a need-to-know basis. My experience also tells me that engineers in general don’t love contracts, or rules restricting their creativity, but they do need to know the “Do’s and Don’ts” of the third party code they handle. Just putting rules in a database is probably not sufficient; a verbal run-through of the rules is a good support for the information in the database, and an excellent way to test whether the database works for its intended audience. You will probably also learn whether the rules set up for the third party IP are too restrictive or too complicated.

Talk to all stakeholders. Sourcing of software is teamwork involving every department in your company – save possibly for the janitors. It takes a team to find and fight all the heads of the Hydra. In other words: to provide all the input for the risk analysis leading you to a good license agreement and a safe passage for the software through your development process, you need help from all the stakeholders. Open Source is no different from any other third party software in this respect.

– Åse Stiller

[Åse has lived through the pains of licensing software from both the selling and the buying side of the cooperation, as part of UIQ and Teleca, and has survived to tell the tale and educate the rest of the industry.]

The SIM card evolution: finally, a breakthrough?

[Is there a future for the SIM card in operator service delivery? Research Director Andreas Constantinou reviews the state of the SIM card industry, the commercial developments in the last 12 months and discusses why the role of the SIM may be indeed coming to a positive inflection point]

SIMcards.jpg

The SIM card is ubiquitous; it’s in every GSM phone. It’s used to identify the subscriber to the mobile network. In fact, the SIM card is the most pervasive service delivery platform, with 95% or more penetration in GSM markets.

Yet a paradox exists in the adoption of the SIM card for operator services. On one hand, tier-2 and tier-3 operators have put the SIM card to innovative uses: the SIM has been used to deliver mobile banking in the Czech Republic, mobile ads in Russia, ringtone downloads in Brazil, payphone use in Nigeria and automatic device detection in Austria.

On the other hand, tier-1 operators have used the SIM mostly for basic applications such as managing missed calls and roaming lists. At the same time, handsets have developed far superior user interfaces to the SIM’s text-based UI, and operators have invested in Java, Flash and on-device portals for advanced service delivery, as opposed to SIM-based applications.

It’s perhaps yet another reminder that innovation does not easily bubble up in large organisations like tier-1 operators, while cash-strapped tier-2 and tier-3 operators have been more resourceful and innovated using existing infrastructure.

Still, the issue of tier-1 operators’ adoption of SIM cards for service delivery is indeed a fundamental one, given that tier-1 operators are the largest SIM customers. Furthermore, with the commoditisation of SIM card functionality, SIM card manufacturers will need to continue delivering new value in order for the whole SIM ecosystem to survive – and a win-lose situation is not tenable.

So is there a bright future for the evolution of the SIM card?

The SIM industry backstage
To understand the status quo, one needs to visit the backstage of the SIM card industry – an all-too-familiar sight for those observing the industry over the last few years. Operators (particularly tier-1s) have been clearly motivated to see SIM cards take a greater stake in service delivery, yet have been reluctant to invest in long-term initiatives without a business case for a 6-month RoI. As such, tier-1 operators have reverted to applications (e.g. on-device portals, active idle screens), platforms (Java, Flash Lite) or ‘container’ programs to further their service delivery aims. On the other hand, handset OEMs have until recently seen the SIM card evolution as a potential compromise to their own agenda and have been slow at adopting industry standards for SIM-enabled service delivery. And while SIM toolkit standards have been adopted in the vast majority of handsets (an estimated penetration of 95% or more), the potential of the SIM as a service enabler has been severely limited compared to the constantly increasing handset feature arsenal.

Last but not least, SIM card manufacturers have since 2006 proposed significant technology advancements, in terms of ‘smarter’ SIM software, near-gigabyte capacity and creative new applications, from blogging and widgets to advertising and idle-screen promotions. Yet these advanced SIM cards have always demanded a significant per-unit price, while the average selling price of ordinary SIM cards has been dropping as much as 30% year-on-year during 2006.

Yet the prospects for the SIM card evolution in early 2008 are not as dire as they may seem. A confluence of developments, both technology and commercial ones are marking an inflexion point for the advancement of the SIM card.

Change of scenes
Three major developments that took place in the last twelve months will likely impact the uptake of advanced SIM cards in 2008:
1. The price war on SIM cards has largely subsided; during 2006, competition from China and between the major SIM card manufacturers caused prices to drop by more than 30% year-on-year. Fortunately, prices have now stabilised, with Gemalto reporting a decline in average selling price (ASP) of only 2% year-on-year for 4Q07 (compare this to handset ASP declines of 5% or more for tier-1 OEMs in 2007).
2. The cost of NAND memory used in SIM cards (as well as PCs and removable storage media) has dropped dramatically in the last two years. The cost delta between a 256KB NOR memory and a 256MB NAND memory has dropped by an order of magnitude within the space of two years. This has helped make mega-SIM cards more affordable to mobile operators who are planning to source high capacity SIM cards.
3. The OMA standards body has been busy finalising the Smart Card Web Server (SCWS) specification, a technology for using the SIM card as an always-on web server that stores operator content, application settings and encrypted files for multiple applications. The SCWS specification is expected to be finalised soon, following three successful interoperability ‘testfests’ which took place between September 2007 and January 2008. The SCWS is a pragmatic specification that leverages the mature and widely used HTTP protocol to enable a range of solutions such as on-SIM portals, NFC, just-in-time customisation and DRM. The SCWS protocol requires a more advanced smart card OS, but no hardware upgrade and hence no increase in hardware BOM. This is in contrast to high-capacity SIM cards, which impact not only the silicon BOM, but also add a requirement for two extra pins on both the SIM card and the reader terminal.

We are not there yet..
Despite the recent developments, and the great potential to address the application distribution barrier, the role of the SIM card has not really advanced beyond that of an authentication mechanism, particularly for tier-1 operators who command scale and dominate OEM terminal requirements. A few tier-2 and tier-3 operators in Latin America and Europe have been using the SIM card in applications such as banking, automatic device detection, ringtone download and idle-screen promotions. Yet tier-1 operators still appear reticent and somewhat undecided as to whether to invest in advanced SIM cards with SCWS capability and/or high capacity.

The primary reason has been pricing. SIM card OEMs have been bundling advanced OS capabilities in higher-end NOR cards of the 256KB and 512KB range, which operators don’t yet perceive the need for. The major handset OEMs have delayed plans to incorporate SCWS (and the necessary underlying BIP server support) due to the lack of an established base of SIM cards that support this functionality. Sagem and LG have committed to supporting SCWS within selected commercial handsets later in 2008, but there are no commitments as to the scale and the sustainability of OEM SCWS adoption.

Overall, the industry has been caught up in a chicken and egg situation where no player has been willing to risk investment in advanced SIM cards.

How to kickstart the system
Pricing and addressable market are the fundamental criteria that have kept the Ferris wheel of SIM card evolution standing still. Yet there are still ways to kickstart the wheel and push the industry's inertia into motion.

For that to happen, SIM card manufacturers need to redraw their pricing plans and figure out how to migrate the software BOM surcharge into service enablement post-sales revenues. Once SCWS capability is featured on standard SIM cards as a norm, the handset OEMs will be incentivised to support the many SCWS use cases and therefore produce compliant handsets via a software update. And once the industry Ferris wheel starts spinning into motion, the benefits for both handset OEMs and network operators will be compelling enough for the wheel to keep spinning for many years.. at least until the next S-curve arrives.

– Andreas

[Andreas is a moderator at the forthcoming SIMposium conference in Berlin, 22-23 April, the annual SIM mega-event for the mobile industry.]

Learnings from the Mobile World Congress: 10 predictions for MWC 2009 (part 2)


[In part 2 of his predictions for MWC 2009, Research Director Andreas Constantinou talks about M&As amongst Linux vendors, OHA devices, enterprise UIs, the challenges for Modu, and the unstable future of UIQ]

Check part 1 for learnings from MWC and predictions on a new Trolltech, the evolution of widget solutions, the relicensing of Qt, acquisitions of WebKit vendors and Danger devices.

Prediction 6: M&As in the Linux vendor landscape
Analysis: Mobile Linux has gone through two phases; the first phase (2000-2006) was the OEM in-house efforts from Motorola, NEC and Panasonic who developed their own middleware and applications on top of MontaVista (and Qt/E in the case of Motorola). The second phase (2004-now) has been the emergence of for-license Linux-based software stacks from MontaVista (the incumbent), WindRiver, OpenMoko, Mizi, Access ALP, Azingo, A la Mobile and Purple Labs. Many of these vendors also offer integration, customisation, productisation and certification services on top of their software stack, as shown in the next diagram. (note that Qt/Qtopia are missing from this chart because it is still not known whether they will be offered under commercial license terms following the Nokia acquisition).

Mobile Linux Landscape

In practice the above taxonomy of mobile Linux vendors is rather simplified and the devil is in the details; OpenMoko is six months late and probably six more months away from being mature enough for v1; Mizi has recently announced a re-developed version of a low cost stack and is looking for customers beyond Korea; Access ALP still has teething problems and its MWC demo was unimpressive; Azingo is well funded and has a quite stable & complete stack incl. WebKit, but has only started to build a services arm; A la Mobile is underfunded for its claim as the ‘Red Hat of Mobile’, while WindRiver appears to be taking on that role; and finally Purple Labs is already on three European phones and has a single-core Linux stack ready for licensing.

Naturally, ten mobile Linux vendors is far too many, while financial challenges will be setting in this year and OEMs will be making hard decisions about which stack/integration vendor to choose. MontaVista had a surprisingly small stand at MWC this year, while it has been losing many head-to-head bids to WindRiver. A la Mobile has publicly only been funded with $3 million, a far cry from the $30 million that Azingo (ex Celunite) has gotten to date. It is likely that MontaVista or A la Mobile would be looking for another financing round or for an exit. [update: A la Mobile secured a second round of $6.75 million from Venrock in February. That’s enough to power a startup for 2 years, but is it enough to build a services organisation?]

On the other hand Purple Labs has been under the radar until recently, when its ownership structure changed with the majority ownership moving from Vitelcom (Spanish ODM) to Sofinnova Ventures (although details of the deal are sketchy). Interestingly, Purple Labs has a mature software stack already in three European Linux-based phones and claims to have the first single-core Linux stack already on a soon-to-be-commercial phone. Yet despite the strength of its technology (and the hardware design expertise of its team) Purple Labs is lacking the professional services arm that will aid OEMs in integration and productisation projects (any takers out there?). [update: Purple Labs has a proportionately-sized pre-sales team, but most importantly has a strong management team incl. the ex-head of Openwave prof. services].

Exits, IPR acquisitions and company acquisitions are therefore likely for mobile Linux vendors by MWC 2009.

Prediction 7: OHA devices; cheap but ugly
Analysis: The Open Handset Alliance and master-chef Google have been cooking the Android SDK for quite some time and the first development boards were shown running Android ‘officially’ at MWC. There are no counter-indications that Google will be able to hit its 2H08 promise for the first Android devices; several chipset vendors have been integrating the Android stack and HTC (followed by Samsung) has significant expertise in bringing up a ‘virgin’ OS into a mature phone software, as it did for Windows Mobile in 2002-4.

The well-architected stack that is Android (incl. plug-and-play core apps and J2SE-like environment) will likely be targeting mass-market devices; with a US-based company as master-chef and HTC as the host, this probably means low-BOM devices with a PDA-like form factor. In other words, low price and data-first design will be prioritised over looks and phone-first design (very much like Windows Mobile devices thus far).

Prediction 8: Enterprise UIs emerging
Analysis: The enterprise segment has traditionally been seen as completely contrary to the consumer segment from a functional requirements perspective; consumer devices have to be fun and sexy, whereas enterprise devices have to be function-first and stripped down of most aesthetic features. But wait.. who said enterprise people are boring?

I believe that some innovation on the user interface and the plastics of enterprise devices is in order. And while plastics innovation is too much of a gamble, UI innovation isn’t (you can change UIs easily with many of today’s UI frameworks). Therefore, I foresee that at least one vendor will be offering enterprise-targeted UI frameworks which provide both eye-candy and functionality such as Word/Excel/PPT/PDF document viewing and rich email (Picsel comes to mind). This also means that the boundaries between enterprise-targeted mobile OSes and consumer-targeted OSes will be blurring, which is also the direction taken by Windows Mobile 7 featuring a customisable UI layer (a major functional delta from all previous Windows Mobile versions).

Prediction 9: The challenges of Modu
Analysis: Modu made big headlines at MWC, not only because of its huge marketing spend, but also because of the innovative nature of its connected device offering. Modu offers a mobile (cellular) building block which is at the center of a connected personal area network of mobile devices. Like many previous attempts at creating a distributed devices environment (most notably IXI), Modu is based on moving cellular connectivity into the centre of a connected devices framework and thereby making it much easier to design, develop and market mobile devices. In principle, the paradigm of a distributed devices environment is a win-win-win for operators, manufacturers and users (as I advocated in this 2002 IEEE paper on the subject). However, the challenge is in bootstrapping the ecosystem of operators, device and services vendors to invest in this new paradigm of building connected devices.

What Modu has done is quite clever; Modu did not develop just an OS for powering this connected device ecosystem (like IXI) or just the connected devices prototypes (see Motorola’s wearables distributed devices collection designed by Frog agency). Modu created the physical building block (a nano-phone, so to speak) that can form the nucleus of the distributed devices system, making it easier to bootstrap an ecosystem around it.

However, Modu is still facing a major challenge: convincing OEMs to build devices around its building block. That means putting its money where its mouth is and funding (or at least part-funding) several handset projects, which is clearly a very expensive exercise. More importantly, today’s handset OEMs are keener to invest in services than in over-innovative handset designs. To convince OEMs to build on top of its building blocks, Modu therefore has to create a framework for delivering services around it.

There are a handful of service companies who are today creating service frameworks; Google, Yahoo and Nokia come to mind. Service frameworks are inherently a loss leader; there’s no money to be made by designing, developing and supporting the service connectivity framework (see Android, widgets/Go 3.0/OneConnect and Qt, respectively). However there is money to be made from enabling service delivery and access (see Google ads, Yahoo ads/services and Ovi, respectively).

Therefore Modu’s challenge is in creating a loss-leader framework for delivering connected services around its cellular building block and convincing OEMs that this should form part of their service investment strategies.

Prediction 10: The unstable future of UIQ
Analysis: Motorola invested in a 50:50 ownership of UIQ alongside Sony Ericsson back in October, in what amounted to a diversion from the company’s Linux strategy. UIQ’s expansion (now 400+ people in Ronneby and Budapest – almost a tripling in numbers within a year) means that the venture has much higher costs than revenues. If you do the numbers, it turns out that UIQ must ship at least 6 million devices annually (at $3 per-unit royalty) in order to sustain its workforce OPEX. This is a far cry from the 1.2 million it shipped in 2006 but close to the 7.7 million estimated for 2008 (both figures from Nomura). UIQ must therefore ramp up volumes very fast in order to sustain its OPEX.
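The break-even arithmetic behind that 6-million-unit claim can be sketched as follows; the headcount and royalty come from the analysis above, while the fully-loaded annual cost per employee is my own illustrative assumption, chosen so the numbers line up, not a figure from Nomura:

```python
import math

# Back-of-the-envelope break-even for UIQ's per-unit royalty model.
# Headcount and royalty are from the article; the fully-loaded annual
# cost per employee is an assumed figure for illustration.
HEADCOUNT = 400
COST_PER_EMPLOYEE = 45_000   # USD/year, assumed fully-loaded cost
ROYALTY_PER_UNIT = 3         # USD per device shipped

annual_opex = HEADCOUNT * COST_PER_EMPLOYEE
break_even_units = math.ceil(annual_opex / ROYALTY_PER_UNIT)

print(f"Annual OPEX: ${annual_opex:,}")
print(f"Break-even volume: {break_even_units:,} units/year")
```

Under these assumptions the workforce costs about $18M a year, which at $3 per unit requires 6 million devices annually, which is consistent with the estimate in the text.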

The real challenge with UIQ however comes with sustaining the underlying Symbian OS strategy. With UIQ’s ownership transferred out of Symbian, Sony Ericsson and Motorola are now arch-rivals of Nokia, who controls the majority of Symbian shares (and in practice most of the decisions taken by the Symbian board). It has for long been rumoured that Nokia has been working on a new-generation OS, but rumours aside, UIQ will face continual challenges in driving the features and strategic agenda for Symbian OS towards favouring UIQ. Moving UIQ to a different kernel support package (some Linux flavour) is a very expensive 2-year operation that UIQ would not easily venture into, given its financial state and the instability of its part-owner Motorola.

Clearly a lot to look forward to until Mobile World Congress 2009..

– Andreas

(while on the topic of predictions, make sure to check out our hugely successful Mobile Megatrends 2008 series. Full presentation below.)

[slideshare id=209579&doc=mobile-megatrends-2008-vision-mobile-1198237688220186-3&w=425]

Website redesign!

We’ve just launched our new website with a completely redesigned look & feel! After a year in the making and working closely with two design agencies, the website is finally alive and kicking. Thanks to Paul at fifty50 and Savvas at Peel-Me for all their hard work. Feel free to browse through the site and let us know what you think!

Site redesign

San Francisco next week?

Continuing with our successful 360 degree workshop on Mobile Open Source, we’ll be delivering the workshop as part of Informa’s Open Source in Mobile conference in San Francisco on March 10. This one-day intensive workshop is a must for companies wanting to understand the economics, legal issues and complex landscape of Linux and open source software vendors in the mobile industry, and make informed decisions on their own positioning.

Check here for more info on the Informa Open Source in Mobile conference, or drop us a line if you are in the San Francisco area and want to meet up.

– Andreas