The inner secrets of the 100 million unit club

[ever thought how hard it is for mobile software companies to penetrate the mobile space? guest blogger Morten Grauballe introduces the ‘100 million unit club’ of successful mobile software firms and spins a tale of myth and reality for making it big in the mobile phone industry]

2007 became the year when mainstream Silicon Valley decided to attack the mobile phone market head-on. With over 1 billion mobile phones shipped every year and the market moving towards 3 billion mobile subscribers, you can understand why.

Apple started the year by announcing the iPhone. Halfway through, the iPhone started shipping, and quite successfully too; the incumbent players took notice, believe me. Then, to make 2007 a real year of change, Google announced Android, a new platform meant to change the dynamics of the value chain. It is free (in a royalty sense) and strongly focused on enabling internet applications and services (as the way to make money). Apple has also announced that it will open up the iPhone for native applications in 2008. It is a complete onslaught on the mobile phone market.

So, if you are a large software player in the PC or internet space, then 2008 seems like the perfect year to penetrate the market and get onto those 1 billion units. You can easily envision the following conversation taking place in well-established software players from San Francisco down to San Jose:

CHoM (Clever Head of Marketing): Over 1 billion mobile phones every year – that is too good to be true! How do we penetrate this market? How do we get to the biggest installed base of users?
RAG (Resident Architect Genius): Not sure
CHoM: Java seems to be a good option – there are millions of Java-enabled phones in the market.
A little later…
RAG: I had a look… Java ME does not give good access to a broad set of APIs. Also, there is significant Java fragmentation across handsets – a complete nightmare, if you ask me!
CHoM: I got it! We will move to native programming – Smartphones are taking off!
RAG: Hmm… Symbian OS, with the largest installed base, has only single-digit percentage market share.
CHoM: But if we add Brew we will get a few more percentage points! [in denial!]
RAG: We are still nowhere near 1 billion units!
CHoM: What about adding Windows Mobile? Or the new Android thing? [Now completely in denial!]
A few hours later…
CHoM: So in summary, we need to port to 8-12 different operating systems to be successful!
RAG: Yep and most of these operating systems do not have publicly available SDKs! [clearly enjoying himself]
CHoM: What?! [Almost crying!]
RAG: Finally… you should know that there is no distribution method for getting software onto phones! [Big grin!]
CHoM: …! [in tears]
A few more hours…
CHoM: So what you are saying is… we need a relationship with the handset manufacturers to get the SDKs and to get our software embedded into their phones! [with a hardened sense of realism!]
RAG: Spot on, boss!

In a world like that, it might be surprising to newcomers (like CHoM and RAG above) that there are successful software players in the mobile phone industry. There are, in fact, quite a few. When your software is on 100 million phones globally, you have joined the ‘100 million unit club’. Some of the leading members of this club are:
Adobe (formerly Macromedia) provides the Flash Lite execution environment
Access provides a successful mobile browser
Beatnik provides the polyphonic ringtone engine on most mobile phones
PacketVideo provides audio and video technology, e.g. for the Verizon V CAST music service
Opera provides a successful mobile browser
Red Bend Software provides the majority of Firmware Over-The-Air (FOTA) updating software
T9 provides the predictive text engine found on a lot of phones
The Astonishing Tribe (TAT) provides the graphics engine that drives a lot of UIs in the wireless industry

By studying the approach of these companies, newcomers can learn a lot about how to tackle the world of mobile. What do they do right?

First of all, they all have excellent products that not only excite the mobile operators, but also bring true value and benefits to consumers around the world. Without this, you should not even try to enter the mobile phone market.

Secondly, these companies embrace complexity, rather than trying to ignore it or waiting for it to disappear. Most, if not all, members of the 100 million unit club have ported their software to the 8-12 leading operating systems in the industry. Where applicable, they will have a Java version (like Opera Mini) and a native version (like Opera Mobile). They have also invested in the art of software optimization (something not always needed on a PC), which allows them to move into the mid-tier and low-tier segments of the market. They also understand the complexities of software distribution. When appropriate, they will have relationships with the handset manufacturers. At other times, they will use the portals of mobile operators or independent service providers to distribute their solution.

Thirdly, these companies understand the market dynamics of the global mobile phone market. Some markets are operator-led, while other markets are more OEM-led. If, for instance, you have managed to get your software embedded on some of DoCoMo's MOAP(S)-based handsets in Japan, then your next port of call should probably be the S60 or UIQ licensees in Europe. If you manage to get on these handsets, then you have an opportunity to move to the proprietary operating systems of these licensees. Gradually you expand your market to more and more platforms across the various markets in the global mobile industry.

Finally, all of the above companies have participated actively in standards work. To get acceptance for your solution, it is important for all the players in the value chain (mobile operators as well as handset manufacturers) that your software or service is based on open APIs and protocols that other people can add value to and support.

(In coining the term the ‘100 million unit club’, I have ignored web programming. In our brave new world of web 2.0, that is admittedly a crime which I am sure web 2.0 fanatics will nail me for. The fragmentation and appropriateness of web programming for mobile phones is, however, a big topic in itself and is probably better left for a separate blog posting.)

Lessons in a changing market
Basing recommendations on extrapolations from the past is always dangerous in a dynamic market. Let’s therefore also look at some of the changes taking place right now. These trends could determine who will and who will not be members of the 100 million unit club in the future.

Open operating systems are definitely gaining market traction. Linux, Windows Mobile, Symbian, and a few others are now responsible for close to 10% of the market. There is still an ongoing debate as to whether they will make up 20% or 50% of the market someday. Whatever your viewpoint, it is not going to happen overnight, and in the short term, Apple’s OS X and Google’s Android platform are two new operating systems that need to be taken into consideration. Platform de-fragmentation is clearly not a trend to bet on in the next 2-3 years. In the 5-year time horizon, it might be.

The good news about the increased competition in the platform market is that SDKs, tools, and support from the large platform providers are improving rapidly. It is therefore becoming easier for the software players to embrace the complexity as described above. Software is becoming more portable.

If we move from the world of software platforms to the world of software distribution, there is more help to be found. The Open Mobile Alliance ratified the Device Management (DM) specification in early 2004. At the heart of the OMA DM standard is a well-designed protocol which enables the service provider to query any handset for its basic characteristics (like model number, firmware version, and settings). According to Ovum (Nov 2007), there is now an installed base of 235 million handsets with OMA DM support. This will grow to 50% of all handsets by the end of 2008. With both handset manufacturers and mobile operators actively using this protocol to provision settings and new software to handsets, it is becoming possible to distribute software post-launch. All of a sudden, you know which handsets are attached to the network and you can offer new features and services. For those software players who are already comfortable with the complexity of the platform market, this is an opportunity to accelerate time-to-market and up-sell new software or services once you are on the handset. The completion of SCOMO (Software Component Management Object) within the OMA will further accelerate this trend.
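To make the querying idea concrete, here is a rough sketch (in Python, purely illustrative) of the kind of SyncML "Get" command a DM server sends to ask a handset for its model number and firmware version. The node URIs ./DevInfo/Mod and ./DevDetail/FwV are standard OMA DM management-tree paths, but the message framing is heavily simplified: real sessions add a SyncHdr, authentication, message/command IDs and often WBXML encoding.

```python
# Sketch: build a (heavily simplified) OMA DM "Get" command querying a
# handset for its model number and firmware version. Real DM sessions wrap
# this in a SyncHdr with authentication and use WBXML on the wire.
import xml.etree.ElementTree as ET

def build_dm_get(node_uris):
    body = ET.Element("SyncBody")
    get = ET.SubElement(body, "Get")
    ET.SubElement(get, "CmdID").text = "1"
    for uri in node_uris:
        item = ET.SubElement(get, "Item")
        target = ET.SubElement(item, "Target")
        ET.SubElement(target, "LocURI").text = uri
    return ET.tostring(body, encoding="unicode")

# Standard management-tree nodes: device model and firmware version.
msg = build_dm_get(["./DevInfo/Mod", "./DevDetail/FwV"])
print(msg)
```

Once the server knows the model and firmware version, it can decide which software package or settings bundle to push, which is exactly the post-launch distribution channel described above.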

2007 was a very exciting year for software providers in the mobile market. Players who understand how to navigate the new world of mobile have a lot to gain. Good luck and Happy New Year to all new candidate members of the 100 million unit club!

– Morten

[Morten Grauballe is EVP Marketing at Red Bend and ex-VP Product Management at Symbian, and has been in the mobile industry long enough to boast both scars and medals]

Boosting internet in mobile: the return of the browser proxies (mobile megatrend series)

(browser proxies are back in fashion… guest blogger Fredrik Ademar looks at the limitations of today’s mobile web and how browser proxies have resurfaced to bring the internet to the masses. Part of our Mobile Megatrends 2008 series.)

Struggling with the limitations of the mobile web
Numerous attempts, more or less successful and well-known, have been made over the past years to replicate the desktop browsing experience in the mobile context. Latency, low bandwidth, limited input capabilities and small screens have typically been the main hurdles to overcome to get anything remotely close to the original web experience. A quite interesting trend in the mobile browsing space, now going through an exciting renaissance, is the concept of bringing in network proxies to intercept and optimize the web-to-mobile traffic. The most well-known example is probably Opera Mini, which has truly made a significant impact on how the mobile web is perceived by the masses. But Opera is only one of many contenders in this space, and there is a set of different initiatives providing similar functionality and benefits (although in slightly different packages), such as Bitstream ThunderHawk, InfoGin, Flash Networks, Novarra, WiderWeb, the Google Wireless Transcoder, etc. The trend seems clear going forward – this could indeed be the answer to the quest for a truly pervasive web experience across mobile and desktop. Or maybe we are hoping for too much?

Ways to address the problem
To begin with, one should note that the solutions provided are not by any means new concepts. The ideas date back to the early WAP days, and many of the issues that now attract attention are in fact exactly the ones that WAP attempted to address with the original WAP gateways. In retrospect, one of the major problems with WAP was that its ambitions stretched too far. For instance, using SMS and USSD as transport mechanisms was a bad idea from the very beginning, and this seriously harmed the priorities and technology trade-offs made. However, one important assumption was right: the insight that simply applying the classical W3C standards to the mobile space was not going to do the job – and that is still the case today.

Standards like HTTP and HTML (with JavaScript, CSS, etc.) are simple and straightforward, but also pretty verbose formats, quite unfit for a mobile environment. Applying these on top of standard TCP as transport does not really match the need for a responsive and user-friendly mobile web service. To some extent it is really a no-brainer to identify potential solutions, and the most straightforward and natural approach is to introduce an intermediate proxy which translates and optimizes the traffic over the air interface, while maintaining the legacy structure and protocols on the server side. Typical functionalities included in the available solutions are page pre-rendering and reformatting, image and data compression, intelligent proxy caching, image size reduction, session tracking, etc.

These functionalities can basically be categorized into the following three technology segments (based on the excellent taxonomy of browser proxies at the S60 browser blog):

Speed proxies
Purpose: image compression, efficient page contents caching, HTTP & content pipelining.
Examples: ByteMobile, NSN, Flash Networks (NettGain), Venturi VServer, Novarra
Adaptation (transcoding) proxies
Purpose: page reformatting, image reduction, menu simplification, session tracking, SSL session handling, XHTML/MP adaptation
Examples: ByteMobile, InfoGin IMP, Google Wireless Transcoder (ex Req Wireless), Novarra nweb, Volantis Transcoder, WiderWeb, Greenlight Wireless Skweezer, Clicksheet
Server based (pre-rendering) proxies
Purpose: pre-renders page before sending and improves navigation
Examples: Opera Mini, Bitstream ThunderHawk

A speed proxy typically makes mobile browsing faster and reduces data to a certain extent, while still preserving the full page. Adaptation and server-based browser proxies, on the other hand, drastically reduce the amount of data sent over the air, but at a significant cost, since the page is no longer the original web experience. Often the page is re-formatted into one long narrow column (e.g. Opera’s Small Screen Rendering), and dynamic effects like drop-down menus and pop-up windows will not work.
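The adaptation step can be sketched in a few lines. The following toy transcoder (all names invented; real products such as Opera Mini or Novarra nweb do vastly more, including image reduction and session handling) drops scripts and styles and linearises the markup toward a single narrow column:

```python
# Toy sketch of an adaptation proxy's transcoding step: strip scripts/styles
# and flatten layout markup into a single-column stream of text lines.
from html.parser import HTMLParser

DROP = {"script", "style", "object", "embed"}    # content the proxy removes
BLOCK = {"table", "tr", "td", "div", "p", "br"}  # flattened into line breaks

class SingleColumnTranscoder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out, self._skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            self._skip += 1
        elif tag in BLOCK:
            self.out.append("\n")
    def handle_endtag(self, tag):
        if tag in DROP and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.out.append(data.strip())

def transcode(html):
    p = SingleColumnTranscoder()
    p.feed(html)
    return " ".join(p.out).replace(" \n ", "\n").strip()

page = "<div><script>var x=1;</script><td>News</td><td>Sports</td></div>"
print(transcode(page))  # the script is gone; "News" and "Sports" stack vertically
```

This is exactly the trade-off described above: the over-the-air payload shrinks dramatically, but the drop-down menus, scripts and original layout do not survive the trip.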

Bringing in the proxies, pros and cons
When benchmarking these products, the performance improvements are indeed often significant. Content size is reduced to 10-50% of the original, and typical sites can be downloaded in half the normal browser download time (ballpark figures from Opera Mini). Since much of the heavy lifting is done in the network, an interesting side effect is that the CPU, memory, etc. requirements on the device are much lower. It is even possible to deploy solutions to devices post-sales that will mobile-web-enable them, even if they did not have that kind of support from the beginning (using e.g. Java-based approaches, as with Opera Mini).
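To get a feel for where part of that 10-50% reduction comes from, here is a toy measurement of plain compression alone (an illustrative figure, not a benchmark: real pages compress less well than this repetitive sample, and transcoding proxies save far more by re-rendering the page, not just compressing it):

```python
# Toy demonstration: how much a speed proxy can save from compression alone.
# Repetitive markup compresses unrealistically well; treat as illustrative.
import gzip

html = ("<html><body>"
        + "<p>the quick brown fox jumps over the lazy dog</p>" * 200
        + "</body></html>").encode()
compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```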

OK, this sounds great – are there really no weaknesses in the browser proxy approach? Yes, there are. A commonly highlighted problem is the lack of true end-to-end security, as well as the problem of ensuring the integrity of the transferred data. These problems are difficult to get around, given the nature of the architectural setup.

Another very relevant problem is that applying different automatic, intelligent conversion algorithms to content tends to violate the original intent of the content author. You can never replicate 100% of the desktop web experience, and in many cases content gets optimized away completely (e.g. Flash content).

Another typical comment is that networks and device hardware are getting more capable each year, so solutions including anything other than standard web browser technology will quickly become obsolete. I think this assumption is completely wrong; there will always be a gap between the mobile and desktop web – the mobile device will always be more limited and therefore needs to be treated differently.

Building a business case
As always, the technology roll-out needs to be coupled with a sustainable business model. Where is the money in all this? Besides providing the core browser experience, there are lots of value-added services, like billing and content filtering, that can be applied. But the true value lies in the fact that companies in this space sit right in the middle of a giant flow of very targeted user data going back and forth. Carefully curated, this asset can prove far more valuable than what can be made from the original service – with that said, it is really no surprise to find Google (with its Wireless Transcoder) among the contenders in this segment.

A megatrend going forward?
Looking ahead to 2008, mobile browser proxies will continue to make an increasingly important contribution to mobile web experiences, especially in harnessing the value of the long tail. This time there is no doubt the proxy-based browser model is here to stay, but it will typically not be perceived as a ground-breaking, revolutionary step – more as a natural and obvious evolution. We will also likely see a consolidation of technical solutions, as some players in the space today are not providing sufficiently scalable and competitive solutions.



Execution engines: understanding the alphabet soup of ARM, .NET, Java, Flash …

[mobile development platforms, execution engines, virtualisation, Flash, Java, Android, Flex, Silverlight.. guest blogger Thomas Menguy demystifies the alphabet soup of mobile software development].

The news at All About Symbian raised a few thoughts about low-level software:

Red Five Labs has just announced that their Net60 product, which enables .NET applications from the Windows world to run unchanged under S60, is now available for beta testing.

.NET on S60 3rd Edition now a reality?

This is really interesting: even the battle for languages/execution environments is not settled!

For years, mobile coding was tightly coupled with assembly code, then C, and to a lesser extent C++. The processor of choice is the ARM family (some others exist, but no longer in the phone industry)… this was before Java.

Basically, Java (the language) targets what is no more than a virtual processor with its own instruction set, and this virtual processor, also called a Virtual Machine (the JVM, in the case of Java), simply does what every processor does: it processes assembly-like code describing the low-level actions to be performed to execute a given program/application.

On the PC, other execution engines have been developed. The first obvious one, the native one, is the venerable x86 instruction set: thanks to it, all PC applications are “binary compatible”. Then Java, and more recently… the Macromedia/Adobe Flash runtime (yes, Flash is compiled into a bytecode which defines its own instruction set). Another big contender is the .NET runtime… with, you guessed it, its own instruction set.

In the end, it is easy to categorize the execution engines:

  • The “native” ones: the hardware directly executes the actions described in a program, compiled from source code into a machine-dependent format. A native ARM application running on an ARM processor is an example – or, partially, a Java program running on an ARM with Jazelle (some Java bytecodes are directly implemented in hardware).
  • The “virtual” ones: Java, .NET, JavaScript/Flash (ActionScript is not so far from JavaScript: the two languages were to be merged in the next version – ActionScript 3 == JavaScript 2 == ECMAScript 4), where the source code is compiled into a machine-independent binary format (often called bytecode)… And what would you call an ARM emulator running on an x86 PC? You guessed it: virtual.
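The "virtual processor" idea above can be shown in a few lines. Here is a toy stack-based VM with a made-up instruction set (real VMs – the JVM, the .NET CLR, Flash's AVM – work on the same principle, with far richer instruction sets, a verifier, garbage collection, and usually a JIT):

```python
# A toy stack-based virtual machine: an invented instruction set, executed
# by software exactly the way a hardware processor executes native opcodes.
def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == "PUSH":        # push the next literal onto the stack
            stack.append(bytecode[pc]); pc += 1
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":       # return the top of the stack
            return stack.pop()
    raise RuntimeError("fell off the end of the bytecode")

# (2 + 3) * 4 compiled to our invented instruction set:
program = ["PUSH", 2, "PUSH", 3, "ADD", "PUSH", 4, "MUL", "RET"]
print(run(program))  # -> 20
```

The same bytecode runs unchanged wherever the interpreter runs – which is exactly the binary-compatibility promise discussed next, and exactly why it turned out not to be enough.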

So why bother with virtual execution engines?
Java was built on the premise of the now famous (and defunct) “write once, run everywhere”, because at that time (and I really don’t know why) people thought it was enough to reduce the “cross-platform development issue” to low-level binary compatibility, simply allowing the code to be executed. And we now know it is not enough!

Once the binary issue was fixed, the next really big one was APIs (and, to be complete, the programming model)… and the nightmare begins. When we say Java, we only name the language, not the available services; the same goes for JavaScript, C# or ActionScript. So development platforms started to emerge: CLDC/J2ME, the .NET Framework, Flash, Adobe Flex, Silverlight, JavaScript+Ajax, Yahoo! Widgets… But after all, what are GNOME, KDE, Windows, MacOS, S60, WinMob? …Yes, development platforms.

The Open Source community quickly demonstrated that binary compatibility was not that important for portability: once you have the C/C++ source code and the needed libraries, plus a way to link everything, you can simply recompile for ARM/x86 or any other platform.

I’ve made a big assumption here: that you have “a way to link everything”. And this is really a big assumption: on many platforms you have no dynamic linking, nor a library repository or dynamic service discovery… so how do you cleanly expose your beloved APIs?

This is why OSGi was introduced, much like COM, CORBA, some .NET mechanisms, etc.: it is about component-based programming, encapsulating a piece of code around what it offers (an API, some resources) and what it uses (APIs and resources).
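A toy sketch of that idea (all names invented; vastly simpler than OSGi's bundle model or COM's interfaces): components declare what they provide and what they require, and a tiny registry wires them together by service name instead of by hard-coded linkage.

```python
# Minimal sketch of component-based programming: components publish services
# to a registry ("what they offer") and look up the services they depend on
# ("what they use"), instead of linking directly against each other.
class Registry:
    def __init__(self):
        self._services = {}
    def provide(self, name, impl):
        self._services[name] = impl
    def require(self, name):
        try:
            return self._services[name]
        except KeyError:
            raise LookupError(f"unresolved dependency: {name}")

# One component exposes an API (a stand-in for a real HTTP stack)...
def http_get(url):
    return f"<fake response for {url}>"

registry = Registry()
registry.provide("net.http", http_get)

# ...and another component discovers it at run time, by name only:
fetch = registry.require("net.http")
print(fetch("http://example.com"))
```

The point is the decoupling: the consuming component never names the providing one, so either can be replaced, versioned or deployed separately – which is the whole pitch of OSGi-style component platforms.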

Basically an execution engine has to:

  • Allow binary compatibility: abstracting the raw hardware, i.e. the processor, either using a virtual machine and/or a clean build environment
  • Allow clean binary packaging
  • Allow easy use and exposition of services/APIs

For virtual engines, it is nearly impossible to dissociate the language(s) from the engine: Java for… well, Java; ActionScript for Flash; all the #-languages for .NET. An execution engine is nothing without the associated build chain and development chain around the supported languages.

In fact, this is key, as all these modern languages have a strong common point: developers do not have to bother with memory handling, which, as any C/C++ coder will tell you, means around 80% fewer bugs – a BIG productivity boost. But also (and this is something a tier-one OEM confirmed): it is way easier to train and find “low-cost” coders for these high-level languages than C/C++ experts!… another development cost gain.

A virtual execution engine basically brings productivity gains and lower development costs thanks to modern languages… but we are far, far away from “write once, run everywhere”.

As discussed before, that is not enough, and here come the real development environments based on virtual execution engines:

  • .NET Framework platform: a .NET VM at heart, with a big, big set of APIs (I would like to know which APIs are exposed in Red Five Labs’ S60 .NET port)
  • Silverlight: also a .NET VM at heart + some APIs and a nice UI framework
  • J2ME: a JVM + JSRs + …well, different APIs for each platform
  • J2SE: a JVM + a lot of APIs
  • J2EE: a JVM + “server-side” frameworks
  • Flex: Adobe’s ActionScript (Tamarin) VM + Flex APIs
  • Google Android: a Java VM + Google APIs… but, more interestingly, also C++: as Android uses IDL interface descriptions, C++/Java interworking will work (I will have to cover this at length in another post)
  • …and the list goes on

What really matters is the development environment as a whole, not simply a language (for me, this is where Android may be interesting). For example, the Mono project (which aims to bring .NET execution to Linux) was of limited interest before it ported Windows Forms (the big set of APIs for building graphical applications on the .NET Framework) and made them available in its .NET execution engine.

What I haven’t mentioned is that the development cost gains allowed by modern languages come at a cost: performance.
Even if JIT compilation (Just-In-Time compilers: VM technology that translates virtual bytecode into real machine code before execution) has partially closed the CPU gap for Java/.NET/ActionScript, that is still not the case for RAM usage. And in the embedded world, Moore’s law doesn’t help you: it only reduces silicon die size, and thereby chipset cost. So using a virtual engine will actually force you to… upsize your hardware, increasing the BOM of your phone.

And this isn’t a vague assumption: when your phone has to be produced in the 10-million-unit range, using 2MB of RAM, 4MB of flash and an ARM7-based chipset helps you a lot to make money selling at low cost… some nights and days have been spent very recently optimizing stuff to make it all run smoothly…

Just as an example, what was built first at Open-Plug was a low-cost execution engine – not virtual, running native code on ARM and x86 – with service discovery and a dedicated toolchain: a component platform for low-cost phones. It then became possible to add a development environment with tools and mid- to high-level services.

A key opportunity may lie in a single framework with multiple execution engines: easy adaptation to legacy software, plus a productivity boost for certain projects, hardware, or parts of the software.

And in this area the race is not over, because another beast may come in: “virtualization”. In the above discussion, one execution engine benefit was omitted: it is a development AND execution sandbox. This notion of a sandbox, together with the performance argument, becomes really essential when you need to run time-critical code on the one hand and a full-blown “fat” OS on the other – more specifically, if you need to run a GSM/UMTS stack written on a legacy RTOS alongside an open OS (like Linux) on a single-core chipset. Today this is not possible, or very difficult: it may be achieved by low-level tricks if one entity masters the whole system (as when Symbian OS was running inside a Nokia NOS task), or with real virtualization technologies like what VirtualLogix is doing with NXP’s high-end platforms. And in that case the cost gain is obvious: a single-core vs. a dual-core chipset…

But why bother with virtualization rather than rewriting the stacks for other OSs? Because this is simply not achievable in our industry’s time frame (nearly all the chipset vendors have tried and failed).

And again, the desktop was first in this area (see VMware and others): Intel and AMD are introducing hardware to help this process… to have multiple virtual servers running on a single CPU (or more).

So where are all those technologies leading us? Maybe more freedom for software architects, more productivity, but above all more reuse of disparate pieces of software, because it no longer seems possible to build a full platform from scratch; and making those pieces run in clean sandboxes is mandatory, as they were not designed to work together.

Anyway, once you know how to cleanly write code that runs independently of the hardware, you have to offer a programming model, implying how to share resources between your modularized pieces of code. And in that respect execution engines are of no help; you need an application framework (like Hiker from ACCESS – Android is about that, but so are S60, Windows Mobile, OpenPlug ELIPS, …). It will abstract the notion of resources for your code: screen, keypad, network, CPU, memory… but this is another story, for another post.

Feel free to comment!


Nokia's Ovi equals S60 squared

Unless you’ve been hiding in a cave, you will have read about Nokia Ovi, the Finnish giant’s portal for internet-borne services delivered on your mobile. What has not been widely discussed is how Ovi relates to Nokia devices and S60.

Launched in 2002, S60 is Nokia’s software platform, delivering an application framework, key middleware, core applications and a user interface on top of the Symbian OS platform. For the last few years, the vast majority (circa 65%) of Symbian devices have shipped with S60 on top, mostly in the form of Nokia’s own devices. But I’m digressing.

S60 has been Nokia’s strategy for extending its share of the value chain beyond its own 40% of devices. The manufacturer long ago realised that extending far beyond 40% of the mobile device market is pretty hard. So Nokia developed S60, an in-house software platform that can be licensed to other manufacturers. In creating this strategy, Nokia envisaged that many OEMs would take up S60, which would translate into a meaningful addition to its revenue base. It’s worth noting that, contrary to the S40 software platform, S60 incurs far greater costs in maintaining and upholding APIs and in catering to developer needs and handset OEM differentiation requirements.

S60 has therefore been Nokia’s strategy to extend well beyond its own device market share and reap licensing revenues from competing OEMs. As history has shown, very few models and volumes of non-Nokia S60 devices have shipped to date, compared to the 100M+ Nokia S60 devices.

Visualising Nokia’s Ovi strategy
Interestingly, Ovi is an extension of S60 for the connected device age. Ovi is about channelling services (e.g. music and video sharing, widgets, location services, and storage-in-the-cloud services) onto mobile devices. In this sense, Ovi is an extension of S60, but with lower costs: to deliver an Ovi service, you need an enabling client application, not a complete software platform.

What’s more, Ovi is about extending service delivery to connected devices beyond mobile: PCs, set-top boxes, home entertainment and other appliances. And it’s about bringing those services to the consumer irrespective of the device (mobile or fixed) or the medium (over the cable or over the air). If we represent mobile devices as one dimension and the spectrum of connected devices as another, a very revealing relationship between Ovi and S60 emerges, which lends itself well to visualising Nokia’s Ovi strategy.

(figure: Visualising Nokia’s Ovi strategy)

Ovi = S60 squared.

Thoughts?

– Andreas

Do we really need femto cells?

A femto cell is currently the smallest implementation of a cellular network. It is designed to be placed in the home, enabling ordinary mobile handsets to communicate with the mobile network over broadband connections such as cable or xDSL. Femto cells operate in the same licensed spectrum used by macro and micro cells, but have a range of only tens of meters, to cover the area within the home. They bring a whole new value proposition to mobile operators and enable them to enter a previously unreachable market: the home environment.

But do we really need femto cells?

The Femto Forum was formed in July 2007 by seven early femto cell innovators, mostly in the UK (including ip.access and Ubiquisys), and attracted several heavyweights during the summer of 2007, including ZTE, NEC, Alcatel-Lucent, Nokia Siemens Networks and Motorola. The forum currently consists of 50 members distributed across the mobile value chain. It has created four working groups tackling technical, business and marketing issues, and aims to minimize fragmentation in this new market.

Why is there a need for such small cells?
The most efficient way to increase network capacity in a cellular network is to shrink the cell size – OK, there are other ways, including acquiring new spectrum, sectorization and adaptive scheduling algorithms, but all are semi-disruptive and cannot compete with a smaller cell size. However, in an archetypal mobile network, the cost of deploying many small cells in data-hungry areas is prohibitive. Femto cells piggyback on broadband connections, are relatively inexpensive, and can effectively form a distributed high-capacity network. In a much simpler use case, femto cells can provide coverage where ordinary cells cannot, e.g. in highly populated areas where propagation issues are a concern.
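A back-of-the-envelope calculation shows why cell-splitting wins: halving the cell radius quadruples the number of cells covering the same area, and since each cell reuses the spectrum, total capacity scales (to first order, ignoring interference and overlap – a deliberate simplification) with the cell count. The figures below are illustrative only:

```python
# Back-of-the-envelope sketch: shrinking the cell radius multiplies the cell
# count over a fixed area, and hence (to first order) the total capacity.
import math

def cells_needed(area_km2, cell_radius_km):
    """Ideal circular cells, no overlap or interference: a first-order model."""
    cell_area = math.pi * cell_radius_km ** 2
    return math.ceil(area_km2 / cell_area)

AREA = 100.0               # an illustrative 100 km^2 urban area
for r in (2.0, 1.0, 0.5):  # shrinking radii: macro -> micro-ish -> smaller
    n = cells_needed(AREA, r)
    print(f"radius {r} km -> {n} cells -> ~{n}x single-cell capacity")
```

Each halving of the radius roughly quadruples the cell count, which is exactly why the prohibitive part is not the physics but the cost of deploying and backhauling all those sites – the cost that femto cells sidestep by piggybacking on the home broadband connection.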

(figure: mobile cell comparison)

(Although pico and femto cells may appear similar, a pico cell connects to a base station controller to extend coverage in areas that lack it, e.g. enterprise locations. Femto cells may include some form of base station controller and are more intelligent.)

What femto cells really propose is revolutionary for mobile and fixed operators, assuming they aim to provide more than just coverage in the home. That said, femto cell application is most likely to depend on the region where it is deployed: Western Europe is most likely to use femto cells for advanced data services, while North America is more likely to see femto cells used for coverage in remote areas, where low traffic does not justify a typical base station.

Are femto cells valuable as marketed currently?
First of all, I am not convinced that fixed operators will be happy to see mobile operators piggybacking on their broadband connections and generating revenue through them, cannibalizing bandwidth that could otherwise be used for fixed services. Although some form of agreement between mobile and fixed operators is likely, it is still too early to discuss this while serious technical difficulties may face femto cells.

A serious technical issue is interference, with femto cells interfering with each other and with the macro/micro cells of the main mobile network. Simon Saunders, chairman of the Femto Forum, affirms that major femto cell developers have made their products environment-aware and intelligent enough not to interfere. This may be the case, but I would like to see how femto cells interact when there are tens of them in the vicinity, all trying to work in the same spectrum. Another issue is whether the mobile network will be able to cope with so many distributed base stations accessing the core elements of the network, including the central switches, location registers, softswitches, media gateways etc. These may have been designed to cope with hundreds of base stations in dense urban areas, but the number of base stations may escalate to several thousand if the mobile operator embraces femto cells.
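To see why same-spectrum neighbours matter, here is a toy co-channel interference estimate using a log-distance path-loss model (all parameters here are illustrative assumptions of mine, not vendor or Femto Forum figures): the signal-to-interference ratio at a femto user depends only on the distance ratio to the serving and interfering cells when both transmit at the same power.

```python
import math

def path_loss_db(d_m: float, pl0_db: float = 38.0, n: float = 3.0) -> float:
    """Log-distance path loss: PL(d) = PL(1 m) + 10*n*log10(d).
    pl0_db and the exponent n are assumed indoor-ish values."""
    return pl0_db + 10.0 * n * math.log10(d_m)

def rx_dbm(tx_dbm: float, d_m: float) -> float:
    """Received power in dBm at distance d_m from a transmitter."""
    return tx_dbm - path_loss_db(d_m)

# Two femto cells at 10 dBm on the same channel: serving cell 5 m
# from the user, the neighbour's cell 20 m away next door.
signal = rx_dbm(10.0, 5.0)
interference = rx_dbm(10.0, 20.0)
sir_db = signal - interference
print(round(sir_db, 1))  # -> 18.1
```

With equal transmit powers the SIR collapses to 10*n*log10(d_interferer/d_serving), so it shrinks fast as neighbouring femto cells get closer, which is exactly the scenario of tens of uncoordinated femto cells in one apartment block.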

As far as usage is concerned, I can't see a solid scenario for femto cells. They can bring mobile wireless data to the home, with the added benefit that users can access new applications on a device they are already familiar with. However, I don't see how a mobile device can compete with a PC or a notebook computer for the data services most commonly accessed at home: Web, email, social networking and multimedia. And if mobile operators build WiFi into the femto cell box to enable computer networking, I think fixed operators will get quite alarmed.

I can see three ways for mobile operators to bring something of interest to end users with femto cells:

  • New services: Mobile operators can release new services targeting mobile devices with very high speed connections. Intelligent architectures that distribute intelligence to the edge of the network (including IMS) are ideal for this setting, but then again, user behavior is nearly impossible to predict, and deploying these kinds of services would require heavy capital expenditure on behalf of the mobile operator.
  • New terminals: This is a far more radical approach. Mobile operators can promote devices with increased display and input capabilities to be used both with femto cells and outdoors. This would be possible only once proof of concept has been achieved and economies of scale are in place to justify the need to change handsets (or get an additional one).

  • New coverage: Or they could simply add coverage where there isn't any to start with, and build a stable of applications after end users are familiar with cell-at-home solutions.

Do we really need femto cells?
Femto cells may be a good thing. After all, distributed is the way forward: FON and Meraki enjoy success with little overhead cost compared to traditional network providers by giving more power to the end user. I am not saying that the mobile operator will give more power to the end user, but it will enable more advanced applications and perhaps cheaper basic mobile services, including voice and SMS, at home. There is a lot of work to be done to make sure that:

  • Fragmentation is managed and technical issues are resolved (e.g. Nokia Siemens has released a femto gateway that talks to other vendors' femto cells via a proprietary interface).
  • Operators market (and subsidize) the devices very carefully.
  • Mobile operators work with fixed operators to set up some form of cooperation enabling femto cells, or assess whether they should offer fixed services themselves.
  • End users are educated that health risks are minimal (as with guideline-compliant macro/micro cells).

However, as it stands (and for the short-term future) I wouldn't pay anything to have a femto cell at home, when I can enjoy voice calls over circuit-switched telephony (or VoIP) practically for free and already have a very fast broadband connection with WiFi.

Would you?