Nokia: ST-Ericsson, Qualcomm, Broadcom… bye bye Texas Instruments, and hello to the new Nokia!

[Following on from three hardware related Nokia press releases, guest blogger Thomas Menguy discusses how these announcements fit within the new Nokia strategy]

MWC is in full PR mode at the moment, and the following three announcements from Nokia show how the game is changing in Finland.

Nokia selects Broadcom as a next generation 3G chipset supplier:

“Today’s announcement with Broadcom is a further example of Nokia’s commitment to our diversified, multi-supplier chipset strategy,” said Kai Oistamo, Executive Vice President, Devices, Nokia. “This agreement, which targets low cost, high volume markets, demonstrates that we view Broadcom as a reliable supplier to bring the benefits of 3G to Nokia customers around the world.”…

Then Nokia’s selection of the ST-Ericsson platform for Symbian/S60 phones:

…Nokia and ST-Ericsson announced they are co-operating to provide the Symbian Foundation with a reference platform based on ST-Ericsson’s U8500 single chip…

and finally how Nokia and Qualcomm plan to develop advanced mobile devices:

…Nokia and Qualcomm Incorporated (Nasdaq: QCOM) today announced that the two companies are planning to work together to develop advanced UMTS mobile devices, initially for North America. The companies intend for the devices to be based on S60 software on Symbian OS, the world’s most used software for smartphones, and leverage Qualcomm’s advanced Mobile Station Modem(TM) (MSM(TM)) MSM7xxx-series and MSM8xxx-series chipsets…

What does this all mean?

For years Nokia relied on Texas Instruments to produce its custom 2G/2.5G/3G chipsets. Nokia designed the core chipset and let Texas Instruments finish the integration and physically produce the chips: Nokia has mastered the whole hardware IP of its phones, and has not relied on generic chipsets for the vast majority of its production, with all the margins this implies :-).

Nokia is now feeling the wind of change: from one supplier, the OEM is transitioning to three. Nokia has licensed its 3G hardware IP to ST (and presumably to Broadcom; rumors also mentioned Infineon), and will also use some “generic” chipsets.

Texas Instruments has really dropped the ball here, by stopping 3G investment (well, it made some, but failed to deliver), and by being mostly run by business people with no technical vision of where the market is going. How can a company with 70% of the billion-unit chipset market leave that market completely in such a short amount of time? Nokia’s diversification is part of the equation, for sure.

And Nokia really seems to be shifting its focus: relaxing its efforts on the chipset front, not simply to cut internal costs but to invest elsewhere, and my guess (like everyone else’s 🙂) is of course on Ovi, services, etc.

PR after PR, announcement after announcement, product after product, Nokia is showing how serious it is about reinventing itself again. It won’t happen overnight, but it is coming, and it may be a game changer indeed.

– Thomas

Cloud Computing anyone?

[Cloud Computing is the new buzzword; blogger Thomas Menguy tries to decipher its underlying concepts, the main actors, the business models and the implications for the industry.]

Cloud Computing is everywhere, and is beginning to look like the next big thing. But the term seems to regroup a plethora of new and old concepts with no clear consensus: everybody seems to understand what it is, but when asked for a clear definition, it is not so easy (I know, I’ve tried recently… and miserably failed 🙂). Here is my attempt to give it some sense.


I’ll begin with some quotes grabbed from a nice video from the Web 2.0 Expo:

Everything that we think of as a computer today is really just a device that connects to the big computer we are all collectively building… Cloud computing: how computing services will be delivered in the future.

Tim O’Reilly

A chance for developers to not worry about “things”… business concerns, scaling concerns.

Matt Mullenweg (WordPress Co-founder)

A way to deliver services rather than applications, completely independent of platform, completely independent of physical hardware. And I hope it works.

Vamshi Krishna Mokshagundam

Ok, so to sum up those gurus’ words, cloud computing seems to be about:

  • Software Services deployment
  • Transparent scaling of those services
  • Reliability (no down time worry)
  • Monetization handling
  • Decoupling software from the physical hardware it runs on

After this helicopter view, we can try to be a little more educated; reading this excellent article from ExplainingComputers about the cloud may help.

It describes a very good metaphor for all this cloud stuff:

In his book The Big Switch, Nicholas Carr compares the growth of cloud computing to the development of the electricity network around a century ago. Before that time businesses had to generate their own power and therefore had to choose their location based on the available means of generation, such as moving water to drive a wheel or a supply of coal. However, with the availability of a reliable electricity grid to which they could connect, firms were increasingly freed from such constraints to focus on the other aspects of their business.

In exactly the same manner we are today just about entering an age in which both individuals and organizations will be able to dispense with a large home computer or corporate data centre, and instead connect far leaner computing devices to cloud computing resources that will fuel their information processing requirements. It is therefore hardly surprising that cloud computing is also being referred to as “grid computing” or “utility computing”

ExplainingComputers

What a paradigm shift! Computing power, data storage and services will soon be outsourced to third parties.

Now, getting back to the industry, cloud computing seems to be the sum of two concepts:

Software as a Service, or SaaS; perhaps you know it under another name: Web 2.0

It can be described as desktop-like applications accessed within the browser (or via a rich desktop application technology like AIR), where the storage and processing happen on dedicated servers.

Those services can be free or not, here are some notable examples:

  • A CRM for marketing/sales, with a per-user monthly fee ($9 to $65 a month)
  • An excellent one, free for personal use, then a few dollars per month per user for business
  • A project-management tool, with a per-user monthly fee (around $20 to $40)
  • Even IBM is going this route with a kind of hosted Lotus service (I can’t get prices…)
  • Of course, a suite to store/share/edit office documents: free, but with a paid version for enterprise. Gmail is of course there too, as is Google’s web photo album (prices depend on storage)
  • Adobe plays the game with a kind of “online” Photoshop Elements to store, share and edit your personal photos: free for simple use, from $19 to $129 a year to grow the storage; different services are proposed if you already own Photoshop Elements or Premiere Elements. Adobe also provides an online collaborative office suite: free to use, but Acrobat desktop is heavily advertised across the tool.
  • Apple’s MobileMe, for photos, mail, events, contacts and calendar shared between desktop and mobile (iPhone), at $99 a year.
  • Microsoft’s answer to Apple: SkyBox/SkyLine/SkyMarket (MobileMe + App Store for WinMob). Microsoft also has some offers around Microsoft Live, and some plans for hosted Exchange services; I don’t have any price point to compare them to “standard” Exchange installations.

Of course I am forgetting a lot of others, like Flickr, Yahoo! services, etc.

All those services have in common:

  • Ease of use, not only for the service itself, but also for billing, maintenance, installation, deployment, etc.
  • Affordable, with prices depending on storage/number of users/services accessed
  • Neat and modern UIs
  • Packaged and well defined services

It is this last point that led some of those providers to open their infrastructures, putting in place the Next Big Thing:

Hardware as a Service, HaaS

Those SaaS providers have grown their infrastructure to support scaling and reliability for their services… the next step is to open it up and monetize it.

So here is HaaS, where the business model is simply to sell RAM/CPU/storage/bandwidth/services according to the needs of the customer.

  • The first real one: Amazon EC2, part of the Amazon Web Services (AWS) platform. A way to deploy and scale a web application, paying only for the resources it actually uses (prices are around $0.10 to $0.80 per CPU-hour, $0.10 per GB transferred, $0.15 per GB stored per month, and $0.01 per 1,000 PUT / 10,000 GET requests). (Side note: Adobe proposes LiveCycle ES on the Amazon cloud.) Amazon describes its solution as:
    • Elastic: users can increase or decrease their hardware requirements within minutes
    • Flexible: users can choose the specification of each individual instance of computing power purchased
    • Inexpensive: no dedicated capital investment required
    • Reliable: it makes use of Amazon’s proven datacenter and network infrastructure
  • Google of course is there (do yourself a favor and read this about the AMAZING Google infrastructure) with Google App Engine, free for now but fairly limited
  • Smaller actors like Mosso, GoGrid or 3tera are popping up with the same kind of technology
  • IBM is jumping in too, with Blue Cloud
  • HP, Intel and Yahoo are joining forces on cloud computing research
  • For me Facebook is part of the game: an easy way to deploy and monetize (?) social applications. Ning is another example (for social networks)
  • And of course Microsoft with Azure:

Azure seems to be really complete, with a new OS, great marketing materials, etc… but, as always with MS, not really available yet. The business model is again identical: you pay for the resources you use.
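To make the pay-per-use model concrete, here is a back-of-the-envelope sketch (in Python) of what an EC2-style monthly bill could look like, using the price points quoted above; the usage figures are invented for illustration:

```python
# EC2-style pay-per-use bill, using the price points quoted in the post.
# The usage numbers below are made up for illustration.

CPU_HOUR = 0.10         # $ per CPU-hour
GB_TRANSFER = 0.10      # $ per GB transferred
GB_STORED_MONTH = 0.15  # $ per GB stored per month
PER_1000_PUT = 0.01     # $ per 1,000 PUT requests
PER_10000_GET = 0.01    # $ per 10,000 GET requests

def monthly_bill(cpu_hours, gb_out, gb_stored, puts, gets):
    """Sum the pay-per-use line items into one monthly total."""
    return (cpu_hours * CPU_HOUR
            + gb_out * GB_TRANSFER
            + gb_stored * GB_STORED_MONTH
            + puts / 1000 * PER_1000_PUT
            + gets / 10000 * PER_10000_GET)

# One small instance running all month, with modest traffic:
total = monthly_bill(cpu_hours=720, gb_out=50, gb_stored=20,
                     puts=100_000, gets=1_000_000)
print(round(total, 2))  # around $82 a month, with zero capital investment
```

The striking part, as Amazon’s own pitch says, is that every line item goes to zero when the application is idle: no dedicated capital investment is required.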


The schema above summarizes those technologies. What is emerging is a new kind of OS, capable of handling:

  • faulty hardware,
  • load balancing,
  • heavy multiprocessing and parallelization,
  • virtualization technologies, which are key here (at least I now understand VMware’s market cap!),
  • advanced storage technologies and databases.

Google has built its own stuff (the three core elements of Google’s software: GFS, the Google File System; BigTable; and the MapReduce algorithm), Microsoft too (and presents Azure as exactly that: a new OS), while Amazon, Yahoo and others are building on open-source initiatives.
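The MapReduce algorithm mentioned above is simple to sketch. Here is a toy, single-process version of the programming model; the real implementations distribute the map and reduce phases across thousands of machines, which is precisely what these new “cloud OSes” handle for you:

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    """Toy, single-process MapReduce: map each input to (key, value)
    pairs, group the values by key, then reduce each group."""
    groups = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):        # map phase
            groups[key].append(value)
    return {key: reducer(key, values)          # reduce phase
            for key, values in groups.items()}

# The classic word count, over a handful of tiny "documents":
docs = ["the cloud", "the big computer", "cloud computing"]
counts = map_reduce(docs,
                    mapper=lambda doc: [(word, 1) for word in doc.split()],
                    reducer=lambda word, ones: sum(ones))
print(counts["cloud"])  # "cloud" appears twice
```

The developer writes only the mapper and the reducer; partitioning, scheduling and fault tolerance are the infrastructure’s problem.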

A nice summary of what we can do with cloud computing, from the Yahoo white paper:

What does it take to get the Next Great Thing off the ground?


  • Set up multiple replicas of a clustered data store
  • Set up a system for indexing
  • Set up a system for caching
  • Set up auxiliary DBMS instances for reporting, etc.
  • Set up the feeds and messaging between them
  • Write the application logic
  • A fairly complex system, before the first line of new code

Our vision:

  • Write the application logic
  • Use a hosted infrastructure to store and query your data

=> Or, as Joshua Schachter puts it: “The next cool thing shouldn’t take a team of 30, it should be three guys, PHP and a long weekend”

Yahoo white paper

This is all well and good but where is the catch?

Many aspects are slowing this IT revolution

  • Concerns around privacy and collusion: handing over all my data (as a company) AND the processing of my critical business to Amazon or Google may lead to collusion. Google is no longer the “don’t be evil” company it may once have been, nor are Microsoft or Amazon… Or even worse: if I am a new entrant as a service provider (hmm, say, Nokia with Ovi for example), I just can’t use Google’s infrastructure for that! How can I trust Google with my competing use of its own resources to deliver a service… that competes with Google’s own ones?
  • Concerns about stability: most cloud vendors today do not provide availability assurances. This is particularly an issue with mashups that need a set of web services hosted in various cloud computing environments, any of which may stop working at any time. Seeing the MobileMe launch fiasco, Apple learnt how difficult it is!
  • Concerns around security: the old dilemma, “should I put my money in a bank or in my own building?”… we all know the right answer now.
  • Regulation issues: for example in Europe, some countries require that services and/or customer data be retained within the country’s borders.
  • This is new technology: even if it is simple, there is a learning curve
  • IT departments may feel threatened: after all, the tedious tasks of updating, backup and hardware handling are now externalized…

One key point: to be trusted, cloud computing providers may have to stop offering their own services and focus ONLY on providing a compelling and efficient cloud platform.

Where is the Mobile industry: client side?

As Tim O’Reilly says in the first quote, ALL devices are morphing into cloud access points; phones are on their way, and MIDs and netbooks just show it more clearly.

The iPhone is the first real device to access the cloud effectively, and what is really interesting about it is that the browser is not the preferred choice for accessing the cloud: the vast majority of non-game iPhone applications are simply optimized front-ends to a dedicated SaaS! I predict the same for the Android Market… and many software actors will pop up around this cloud interaction.

Nokia is morphing into a cloud computing provider… but doing the whole thing alone: Ovi is the infrastructure AND the service, and Nokia devices are nice cloud front-ends.

Time will tell if a single actor can handle all three aspects; Google, Microsoft and Apple are also trying…

Where is the Mobile industry: server side?

Doing this overview, I was really surprised not to see the “natural” actors of this new paradigm:

  • Who has a BIG infrastructure?
  • Who can link this infrastructure to the final devices/customer?
  • Who has been deploying complex services to millions of customers for decades?
  • Who handles directly the customer billing?

… hmm, you guessed it: our beloved CARRIERS!

Cloud computing would be a fantastic way for them not to fall into the dumb-pipe category. Let’s face it, developing services has to be done by service providers, not operators (who wants to use their operator’s IM or mail? Social network? Photo sharing?).

If carriers were able to leverage their fantastic cloud computing capabilities, they might stop developing sure-to-fail services and monetize their pipe not only to the final customer but also, smartly, to the service provider (NaaS seems to be a first attempt, but I still don’t understand the business model). Perhaps a bold statement; I would be more than happy to have some carrier comments on this one!

Looking forward to your comments.


Adobe Mobile Packager: are runtimes still important or are development environments and tools taking over?

[Adobe just released a new way to package Flash Lite applications for S60 and Windows Mobile: this announcement, when linked to the announced Google Native Client, the Adobe Alchemy project and other industry initiatives, is an indication of where desktop and mobile development are going. Blogger Thomas Menguy tries to bring some coherence to these seemingly uncorrelated initiatives].

(Image: Tower of Babel)

At one time application developers were targeting OSes: Windows, MacOS, Unix.

At some point the target began to move towards runtimes (or Application Environments, as discussed in this earlier article): the web browser, the Flash player (inside the web browser), Java VMs, .NET, and more recently JavaFX, Silverlight, AIR…

In all cases each runtime imposes its own development environment, tools, SDK and, above all, a development language (Java for the Java VMs, ActionScript for Flash/AIR, JavaScript for the web browser, C# for .NET).

And of course the runtime has to be installed on your final target, BEFORE deploying your application or content.

But the lines between tools, languages and runtimes are now blurring, as evidenced by several industry moves:

  • Open mobile OSes are all offering solid and robust application and content management (the Mobile Application Store syndrome). A runtime sandboxing its dedicated content away from the rest of the system is seen as unnatural and a bad user experience for handling content.
  • Google has a framework (GWT: Google Web Toolkit) to develop for the web browser runtime… except that the development language is NOT JavaScript:
    • You develop in Java
    • In Eclipse or NetBeans
    • You can use a RAD tool
    • The Java code is compiled to JavaScript and runs in a browser, not a Java VM (except during development)
    • This brings a kind of unified approach for the client and the server
  • OpenLaszlo is a great RIA development platform… without a specific runtime:
    • You develop in the OpenLaszlo language: LZX, a specific XML + JavaScript
    • You compile your code for Flash or DHTML (a Java version exists but doesn’t seem to be supported anymore), so you can select your runtime!
  • .Net /Silverlight
    • You can choose your development language: VB, C# or ActionScript
    • All are compiled to the .NET bytecode runtime
    • Microsoft is releasing its “Expression” line of tools to bring ease of development to the designer/developer
  • Adobe AIR
    • You can develop in Flash/Flex/Action Script or … in AJAX (Javascript)+HTML
    • The AIR runtime is in fact an aggregation of a web runtime (WebKit) and a standalone Flash player
    • Your applications are deployed… nearly like any other application on the underlying platform. The ‘nearly’ is important, because the AIR runtime installation is still visible, as is the application’s AIR packaging
    • Adobe is releasing Catalyst, a very nice WYSIWYG application prototype IDE targeted to designers with strong links to CS4
  • Google Native Client
    • Allows developers to build and reuse C/C++ code… in the browser
    • Uses a raw GCC toolchain (and so the browser plugin certainly has to embed an OS-independent dynamic loader… reminds me of something we have been doing for years at Open-Plug 🙂)
  • Haxe:
    • An ActionScript-like language you can compile to… PHP, C++, Java and of course ActionScript
    • Unification of the client and server development
  • The Adobe Alchemy project (explained here, for the techies):
    • Compiles any C/C++ code to ActionScript bytecode to be run in a Flash player (the examples of Doom and Quake running in Flash are now famous)
  • And the announcement triggering this analysis: Adobe Mobile Packager
    • Development in CS4, with CS3 device central
    • The Flash Lite application is packaged in a “standard” .CAB file for Windows Mobile or a .SIS file for S60, with everything needed to make it run
    • Flash Lite applications are no longer second-class citizens: you don’t have to open the Flash runtime anymore to launch them
  • SonyEricsson Capuchin
    • … which is, in the end, a way to package Flash Lite applications in a Java JAR file.

All those examples depict the same underlying trends:

  • We see, more and more, a decoupling between the development environment and the targeted runtimes
  • Many development languages are popping up, and we won’t have a “one language fits all”: developers will tend to use
    • What they know, and it’s even easier now with all those tools
    • What lets them reuse legacy code as much as possible
    • What fits best for a particular task
    • What can help with client/server development
  • Ease of development and tooling seems to be key, especially looking at Microsoft and Adobe strategies
  • On-device application management is left to the underlying platform/OS, and will be more and more abstracted away for developers targeting multiple platforms from a single application development environment.

From what I see today, I tend to think that Adobe is getting it right, little by little, especially thanks to their very strong tooling offer (CS4/Flex Builder/Catalyst)… and we may see other initiatives, from players like Nokia or even Google, to accelerate the development and deployment of services (web or not).

Interesting times for a developer!

Looking forward to your comments.


Capuchin: Sony Ericsson strikes back in the Application Environment space… is it a strike? And what does it mean for development platform fragmentation?

[Sony Ericsson is promoting a new Application Environment mixing Java ME and Adobe Flash Lite: Capuchin. Blogger Thomas Menguy tries to describe it and to evaluate what “yet another” development platform means for the industry.]

Sony Ericsson held a nice webinar last Thursday, interestingly hosted through an Adobe e-seminar:

“Flash Lite meets Java ME on Sony Ericsson phones with Project Capuchin”.

At least now we have some information about Capuchin, and I’ll sum it up for our beloved busy executives:

  • A technology that allows developers to build the UI using Flash Lite and to code the business logic and access to platform services with Java (ME).
  • A development environment with PC-based tools (an Adobe CS plugin for Flash and an Eclipse plugin for Java), simulators, and a specific runtime embedded in SEMC phones.
  • Deployment is done using the well-established Java deployment infrastructure (JARs are used, same signing, etc.).

Here is, first, a transcript of the Capuchin webcast; then, as a conclusion, I’ll share my thoughts on this and its impact on the industry (if you are still there…).

Project Capuchin Web Cast transcript

Flash Lite from an SEMC perspective:

  • Pros:
    • Tools
    • Community
    • Books, forums, tutorials
  • Cons:
    • Limited system services access
    • No security solution
    • Lacks a distribution channel
    • Memory/CPU consumption

Java ME from an SEMC perspective:

  • Pros:
    • Wide platform access: JSRs
    • Security: MIDP protection
    • Distribution infrastructure using JAR
    • Widely adopted language
  • Cons:
    • Lack of designer-oriented tools
    • No rich UI framework
    • Difficult to keep separation between presentation and service layers
    • Designers dependent on programmers in UI development



Capuchin is about mixing those two worlds and strengthening the relationship between UI designers and developers.

Why the Capuchin name? A capuchin is a monkey, like the tamarin… Tamarin being the name of the Adobe ActionScript VM.

Here is a high level architecture presentation of Capuchin:

Flash content is embedded into a .jar and can be launched by some Java code; then, thanks to the Capuchin API, the Flash ActionScript can access the various JSRs or any other Java class of the project.


Below is how an accelerometer API might be made available in the Flash ActionScript of a Capuchin application:

The Capuchin API works both ways: Flash to Java and Java to Flash.

What Capuchin is bringing:

Flash development:

  • Extend current limited APIs with the use of JSR
  • Secure Flash application
  • Deploy flash as java games, distribute Flash content through existing java distribution infrastructures

Java Development

  • Clear separation between business code and UI
  • Nice development tools
  • Professional UI tools

How to use Capuchin:

There are three main ways to use it:

  1. Packaging pure Flash Lite content using jar
  2. Java Midlet using Flash Lite for the UI layer
  3. Java Midlet using Flash Lite for PARTS OF THE UI

Adobe has a nice technology: MXP, a format for packaging extensions. Capuchin uses MXP to package the APIs that will be mapped into ActionScript.

There is an Eclipse Capuchin plugin to create those API declarations (see above) as they will be usable in the ActionScript written in CS3. The tool outputs an XML file, which is used to generate Java classes for the Java part to be implemented… and ActionScript classes to be used in CS3.

Everything is then packaged in a .mxp installation package. SEMC will already provide some MXPs (Bluetooth, others…).
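The “one XML declaration, two generated outputs” idea can be sketched as a toy generator. Note that the real Capuchin XML format and class layout were not shown in the webcast, so the schema and the generated stubs below are purely invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical API declaration: the real Capuchin XML schema is not public,
# so this format and the stubs below are invented to illustrate the idea.
DECL = """
<api name="Accelerometer">
  <method name="getX" returns="int"/>
  <method name="getY" returns="int"/>
</api>
"""

def generate_stubs(xml_decl):
    """Emit matching Java and ActionScript stubs from one declaration,
    mimicking the 'one XML file, two outputs' role of the Eclipse plugin."""
    api = ET.fromstring(xml_decl)
    name = api.get("name")
    java = [f"public class {name} {{"]
    as3 = [f"class {name} {{"]
    for m in api.findall("method"):
        java.append(f"    public {m.get('returns')} {m.get('name')}() {{ /* TODO: implement */ }}")
        as3.append(f"    public function {m.get('name')}():{m.get('returns')} {{ return 0; }}")
    java.append("}")
    as3.append("}")
    return "\n".join(java), "\n".join(as3)

java_src, as_src = generate_stubs(DECL)
print(java_src)  # the Java side to be implemented; as_src is used in CS3
```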

Demo time:

The webcast then featured a demo:

swf2jar :

The goal here was to show swf2jar, the tool that converts a .swf into a .jar: very useful for packaging, because today a Flash game… ends up in the image folder when deployed on an SEMC phone :-)…
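The swf2jar idea is easy to picture once you remember that a JAR is just a ZIP archive with a META-INF/MANIFEST.MF entry. Here is a minimal sketch of the packaging mechanics; a real MIDlet JAR needs more manifest attributes (MIDlet-Name, MIDlet-1, and so on), so this only shows the wrapping itself:

```python
import zipfile

def swf_to_jar(swf_bytes, swf_name, jar_path):
    """Wrap a .swf in a minimal JAR. A JAR is a ZIP archive whose first
    entry is conventionally META-INF/MANIFEST.MF; a deployable MIDlet JAR
    would need more manifest attributes than this sketch provides."""
    manifest = "Manifest-Version: 1.0\n"
    with zipfile.ZipFile(jar_path, "w", zipfile.ZIP_DEFLATED) as jar:
        jar.writestr("META-INF/MANIFEST.MF", manifest)
        jar.writestr(swf_name, swf_bytes)

# Wrap some fake SWF bytes ("FWS" is the uncompressed SWF signature):
swf_to_jar(b"FWS...fake swf bytes...", "game.swf", "game.jar")
with zipfile.ZipFile("game.jar") as jar:
    print(jar.namelist())
```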

Calendar component:

The Project Capuchin plugin for CS3, with MXP packages. The intent here was to show how to use Java services in Flash Lite content.

There are some “Platform components” in the library: in the AS editor, it is possible to import, for example, the package com.sonyericsson.capuchin.calendar.Calendar to pull in the platform classes, so the Calendar class can then be used as normal ActionScript, even though it is a Java service.


One word about the toolchain’s future; the following items are not done today:

  • The Flash emulator will be connected to Eclipse, to use Java services directly and not only stubs
  • UI library: a Flash widget library will be developed
  • Connect everything to the existing SEMC phone emulator
  • Work with Adobe so that in Device Central, when an SEMC phone is selected, the list of available MXPs is provided

What will be published soon:

  • First phone: the C905, compatible with the Capuchin APIs
  • Capuchin APIs, Java classes
  • The swf2jar tool
  • The Capuchin API generator, an Eclipse plugin
  • MXP packages with source code
  • Capuchin tests and video tutorials
  • Demo applications


(final: October)

SEMC Capuchin will be present at Adobe MAX in San Francisco and in Italy in December!

There was some Q&A, with no major questions… mine were not answered:

  • What is Adobe’s involvement in this project?
  • What is Esmertec’s involvement in this project?
  • Is there a roadmap to bring Capuchin to platforms other than SEMC’s?


Some points about this initiative:


  • SEMC already builds a large part of the applications in its feature phones in Java, and it has a strong Java commitment with Esmertec, so on SEMC phones Java is the preferred development method internally… and now, with Capuchin, externally too, as nearly all the platform services are already available in Java.
  • Given the point above, and knowing that parts of SEMC feature-phone themes are already in Flash, merging Flash Lite and Java was a natural choice for SEMC.


  • Flash Lite is the only choice possible for today’s mobile phones (CPU/memory)… but it is really not a complete and efficient UI application framework; it lacks… widgets! So SEMC plans to develop some new ones. Hmm, wait, isn’t Adobe Flex about exactly that, bringing application development to Flash?


  • I am not sure about porting such a technology to platforms other than SEMC’s… to my knowledge, only one other platform has made the Java choice: Google Android, where all the platform services can be accessed through Java; but I don’t see any incentive for SEMC to port Capuchin to Android.



So we have a new Application Environment, with its own SDK, that will most likely be available only on SEMC platforms… Capuchin will extend this never-ending list:

  • iPhone native/iPhone SDK
  • iPhone Web Apps
  • S60
  • Nokia Qt
  • UIQ (oops, RIP)
  • LIMO
  • Maemo
  • Motorola WebUI
  • Android
  • J2ME (and all its flavors …)
  • Capuchin
  • Flash Lite
  • Flash/Flex/Air
  • Brew
  • WinMob
  • PalmOS
  • BlackBerry
  • …and so on…


Are we still talking about cross platform development? About consolidation and standardization?  NO

The industry is pushing the other way, and really this is NOT AN ISSUE.

Services and applications developers have learnt how to reuse code across platforms, and how to architect their code and services so that it is easy to change only the presentation and the adaptation to the platform. After all, developing a UI for an 800×480 screen and for a 176×220 one are two completely different exercises, and really not a big deal if your UI is decoupled from your services; Capuchin helps with that, as do many other technologies.
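That separation can be sketched in a few lines: one shared service layer, and a thin presentation adapter per form factor. All names here are invented for the illustration:

```python
# Toy illustration of decoupling services from presentation: the service
# layer is shared across platforms, and only a thin renderer changes per
# form factor. All names are invented for this sketch.

def contact_service():
    """Shared business logic: identical on every platform."""
    return [{"name": "Alice", "number": "555-0101"},
            {"name": "Bob", "number": "555-0102"}]

def render_small_screen(contacts):
    """Presentation for, say, a 176x220 feature phone: names only."""
    return "\n".join(c["name"] for c in contacts)

def render_large_screen(contacts):
    """Presentation for, say, an 800x480 MID: names and numbers."""
    return "\n".join(f"{c['name']:<10} {c['number']}" for c in contacts)

contacts = contact_service()          # one service layer...
print(render_small_screen(contacts))  # ...two very different presentations
print(render_large_screen(contacts))
```

Porting to a new screen or Application Environment then means rewriting only the renderer, never the service.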

All those new Application Environments bring to mobile platforms great core value for services deployment:

  • openness
  • great tools, ease of development
  • focus on user experience and UI
  • deployment/packaging/distribution strategies
  • security

And we don’t want a “one size fits all” environment; it simply isn’t realistic in an industry where form factors, capabilities and designs are so vastly different. Differentiation is key in this market; just open the platforms with nice and open development technologies, and that is enough!

One big trend we can also foresee is that platform vendors no longer have a software inferiority complex: when you look at the list above, nearly all the initiatives are coming from OEMs, and not really from pure software companies (notable exceptions: Android and WinMob)… the PC-based paradigm seems soooo far away!

Intel buying OpenedHand: Yet another platform? Or the rise of a credible mobile alternative?

[Intel is moving fast toward MID: Mobile Internet Devices, and just bought an open-source mobile centric company: OpenedHand… blogger Thomas Menguy looks at the current Intel strategy to establish a share in the mobile market].

For years Intel has repeatedly failed to get a piece of the 1-billion-devices-a-year mobile phone cake. The latest known attempt was the infamous XScale processor: too big, too slow (albeit with a high MHz count) for the smartphone application processor market, which has been dominated by the usual suspects, the ARM-based manufacturers (TI, Samsung, …).

Yet Intel is coming back to its roots: x86. And their weapon is the ATOM processor.

At first it was designed to be a very low-power and simple x86 core to be used in many-core processors… but its strengths turned out to be fully applicable to a nascent market: the UMPC. And from a UMPC to a MID (like the Nokia 770/800/810 tablets) there isn’t a lot of difference. Anyway, ATOM is not competing against archrival AMD, but… against the ARM manufacturers (the Nokia tablets and the iPod Touch are ARM based), with an important edge: not so much because of performance (even if it is faster), but because it can run Windows! It’s a full x86 chip. OK, power consumption is still faaaaar from an ARM-based system, but Moorestown will lower this barrier:

Intel has publicly committed that Moorestown will have at least 10 times less idle power consumption than the previous-generation Menlow platform.   (source)

Even if running Windows may help convince some manufacturers and users, there is currently a trend for “exotic” software platforms that are, well, simply doing their job. A MID is NOT a generic PC: the Nokia tablet OS, mobile Mac OS X (iPod Touch/iPhone), Linux-based UMPCs, Samsung’s latest smartphones… the upcoming Android and LiMo… are all interesting platforms with no “Windows complex”. Intel has decided to become more than a silicon vendor: they want to go the system-provider route, and for that, of course, they need their very own software platform (yes, a new one…):


The video above is simply a mock-up of what it could look like… This software platform is called Moblin. Moblin 1.0 is (was) a sister project of Nokia’s Maemo (the foundation of the Nokia tablet OS): same application framework (Hildon), nearly the same APIs, same UI framework. However, in Intel’s own words:

Moblin has “failed to generate much interest” among developers. “Moblin 1.0 wasn’t successful in creating this community push,” Hohndel (Intel’s Dirk Hohndel, director of Linux and open-source strategy) was quoted as saying. “Having a vibrant community push is the winning factor.” (source)

So Intel needs a differentiator: Intel and its OEMs will now compete with Nokia, Android, Apple… Intel needs some fancier software, so here it is: Moblin 2.0, still Linux based for the lower layers, but with a new graphical interface based on Clutter and Compiz. Clutter is a “modern” (OK, there is still some glib ugliness in it) 2.5D widget framework, and Compiz a very nice 3D window manager, both based on OpenGL (ES). Here is an example of a Moblin Clutter application:


Around this, Intel is planning a lot of services and applications, like the Contact Epicenter, or a Mozilla-based browser, Fennec (incidentally the same choice as Nokia for its tablets… all the other platforms being WebKit based). And with the announcement of Intel acquiring OpenedHand, the company inherits all of OpenedHand’s projects:

  • Clutter: you know it now
  • gUPnP: a UPnP library
  • Matchbox: a window manager + application used… in the Nokia Tablets, the OLPC and OpenMoko!
  • Pimlico: a set of mobile PIM applications
  • Poky: an open source software development environment for the creation of Linux devices

So basically OpenedHand brings Intel some key pieces for its platform, especially Clutter… and the tool ALL the Linux vendors are missing: a platform builder to help OEMs put their platform in place! (Only Microsoft has one, with the Windows Platform Builder, designed to adapt WinCE and WinMo to various hardware platforms, bring the necessary modules together, etc.) But perhaps the key OpenedHand asset for Intel is the people behind the company; kudos to them for being there since 2000, and now part of Intel! Intel is serious about this platform. Beware Symbian, LiMo, WinCE, mobile Mac OS X and Android: here is a new credible platform to look at!… Anyway, Intel is first a silicon fab, down to its DNA, so the open points are:

  • Is Intel able to commit long-term effort to software?
  • How about software support for its OEMs?
  • And the most important point: is Intel able to design a software platform with a great user experience?

The last point is crucial; WinMo and Symbian have failed in this regard, even though they were designed by software companies. Putting open source technologies together is really not enough to make a consumer product… I'm eager to see whether Intel has, or is hiring, some usability and design experts (and not only software engineers). Anyway, having a new credible, deep-pocketed actor in the industry is always good news… and with the gap from MID to smartphone really blurring, we may expect some great devices!

UI Technologies are trendy…but what are they really good for?

[UI development flow and actors: Graphical Designer, Interaction Designer, Software Engineer; classical technologies: GTK, Qt; next generation: Flex, Silverlight, WPF, TAT, XUL, SVG… guest blogger Thomas Menguy describes the main concepts behind all the UI technologies, what the new-generation ones have in common, what those modern approaches bring to the product development flow… and what is missing for the mobile space].

A good UI is nothing without talented graphical designers and interaction designers. How is the plethora of new UI technologies helping unleash their creativity? What are the main concepts behind those technologies? Let's try to find out!

UI is trendy… thank you MacOS X, Vista and iPhone!







Put the designers in the application development driver seat!

Here is a little slide about the actors involved in UI design


UI flow actors and their expertise

What does it mean?

Different actors, different knowledge …. So different technologies and different tools!

Those three roles can be clearly separated only if the UI technology allows it. This is clearly not the case in today's mainstream UI technologies, where the software engineer is in charge of implementing both the UI and the service part, most of the time in C/C++, based on specifications (Word documents, Photoshop images, sometimes Adobe Flash prototypes) that are subject to interpretation.

  • The technologies used by the designers have nothing in common with the ones used to build the actual UI.

  • The technologies that allow UI implementation require heavy engineering knowledge.

  • Big consequence: in the end, the software engineer decides!

The picture is different for web technologies, where it has been crucial and mandatory to keep the service backend strongly decoupled from its representation: web browsers have different APIs and behaviors, backends have to be accessed in many other ways than through the web representation… and above all, data is remote while presentation is “half local/half remote”.

Separating representation, interaction and data has been the holy grail of application and service development for years. It has been formalized through a well-known pattern (or even paradigm, in this case): MVC (Model View Controller).


MVC pattern / source: wikipedia
From Wikipedia:
Model: the domain-specific representation of the information on which the application operates. Domain logic adds meaning to raw data (e.g., calculating if today is the user's birthday, or the totals, taxes, and shipping charges for shopping cart items). Many applications use a persistent storage mechanism (such as a database) to store data; MVC does not specifically mention the data access layer, because it is understood to be underneath or encapsulated by the Model.
View: renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model, for different purposes.
Controller: processes and responds to events, typically user actions, and may invoke changes on the model.

All the UI technologies offer a way to handle those three aspects and, as a consequence, provide a programming model defining how the information and event flow is handled through the MVC.
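To make the pattern concrete, here is a minimal MVC sketch in Python; the class and method names are illustrative, not taken from any real toolkit:

```python
# Minimal MVC sketch: the model holds data, the view renders it,
# the controller reacts to user events and updates the model.

class Model:
    def __init__(self):
        self.items = []          # domain data
        self.listeners = []      # views observing the model

    def add_item(self, item):
        self.items.append(item)
        for notify in self.listeners:
            notify()             # tell every view the data changed

class View:
    def __init__(self, model):
        self.model = model
        model.listeners.append(self.render)
        self.last_output = None

    def render(self):
        # render the model into a form suitable for interaction
        self.last_output = ", ".join(self.model.items)

class Controller:
    def __init__(self, model):
        self.model = model

    def on_user_typed(self, text):
        # process a user event and invoke a change on the model
        self.model.add_item(text)

model = Model()
view = View(model)
ctrl = Controller(model)
ctrl.on_user_typed("hello")
ctrl.on_user_typed("world")
print(view.last_output)  # -> hello, world
```

Note that the view never talks to the controller directly: it only observes the model, which is what lets several views coexist over the same data.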

See below a simple schema I've made describing a GTK application: when you look at an application screen, it is made of graphical elements like buttons, lists, images and text labels, called widgets (or controls).

Remark: the term “widget” is used here with its literal meaning: “window gadget”. The term is now used a lot in Web 2.0 marketing, and by Yahoo/Google/MS, to mean a “mini application” that can be put on a web page or run through an engine on a desktop PC or a mobile phone. To avoid confusion I prefer the term “control” over “widget” for the UI technologies, but I will keep using “widget” in the rest of the GTK example, as it is the term used by GTK itself.
Widgets are organized hierarchically in a tree, meaning that a widget can contain other widgets; for example, a list can contain images or text labels. In the example below the “root” widget is called a “Window”; it contains a kind of canvas, which itself contains a status bar, a title bar, a list and a softbutton bar. The list contains items, the title bar has a label, the softbutton bar contains some buttons, and so on.

A widget is responsible for:

  • Its own drawing, using a low-level rendering engine, called GDK in the GTK case (GDK offers APIs like draw_image, draw_text, etc.).
  • Computing its size according to its own nature (like the size of the text that will be displayed, for example) and the size of its children.
  • Reacting to some events and emitting some specific ones: a button will emit a “press event” when it is pressed on the touchscreen or when its associated keypad key is pressed.

The widget tree propagates system events (keypad/touchscreen, etc.) and internal events (redraw, size change, etc.) through the widgets. The developer registers callbacks (functions, pieces of code implementing a functionality) that are called when widgets fire events (like the “press event”).


GTK Widget tree structure: a phone screen example
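The tree structure of the schema above can be sketched in a few lines of Python. This is not the GTK API, just the pattern: widgets contain children, compute their size recursively, and propagate events down to the widget that owns the callback. All names here are invented for illustration:

```python
# Sketch of a widget tree: each widget can contain children, computes
# its size from its own content plus its children, and propagates events.

class Widget:
    def __init__(self, name, width=0, height=0):
        self.name = name
        self.children = []
        self.own_width, self.own_height = width, height
        self.on_press = None          # user callback, GTK-style

    def add(self, child):
        self.children.append(child)
        return child

    def size(self):
        # a container is as wide as its widest child and stacks heights
        w = max([self.own_width] + [c.size()[0] for c in self.children])
        h = self.own_height + sum(c.size()[1] for c in self.children)
        return w, h

    def dispatch_press(self, target_name):
        # propagate a "press" system event down the tree to the target widget
        if self.name == target_name and self.on_press:
            self.on_press(self)
            return True
        return any(c.dispatch_press(target_name) for c in self.children)

# build the phone screen from the schema: window > canvas > bars and list
window = Widget("window")
canvas = window.add(Widget("canvas"))
canvas.add(Widget("status_bar", 240, 20))
canvas.add(Widget("title_bar", 240, 24))
lst = canvas.add(Widget("list"))
lst.add(Widget("item1", 240, 30))
lst.add(Widget("item2", 240, 30))
softbuttons = canvas.add(Widget("softbutton_bar"))
send = softbuttons.add(Widget("send_button", 80, 28))

pressed = []
send.on_press = lambda w: pressed.append(w.name)
window.dispatch_press("send_button")
print(window.size(), pressed)   # -> (240, 132) ['send_button']
```

A real toolkit adds layout policies, clipping, focus handling and so on, but the two responsibilities shown here, recursive sizing and event propagation, are the heart of any widget tree.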

The major GTK/GLib formalism is how those events/callbacks are handled: through what is called a main loop (GLib's GMainLoop), where all events are posted to the loop queue, dequeued one by one and “executed” in this loop, meaning their associated user callbacks are called. This loop runs in a single thread. This is what we call a programming model. In nearly all UI technologies such a loop exists, with various formalisms for queue handling, event representation, etc.

To finish with the above schema: the user callback will then access the middleware services, the various databases, and so on.
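Here is a minimal sketch of that loop-and-callback programming model. It is not the real GLib API, just the pattern it implements: a single-threaded queue of events, each dispatched to its registered callbacks:

```python
# Sketch of a GLib-style main loop: events are queued, then dequeued
# one by one in a single thread and their callbacks executed.

from collections import deque

class MainLoop:
    def __init__(self):
        self.queue = deque()
        self.callbacks = {}        # event name -> list of callbacks

    def connect(self, event, callback):
        self.callbacks.setdefault(event, []).append(callback)

    def post(self, event, data=None):
        self.queue.append((event, data))

    def run(self):
        # dequeue until empty; callbacks all run in this single thread
        while self.queue:
            event, data = self.queue.popleft()
            for cb in self.callbacks.get(event, []):
                cb(data)

loop = MainLoop()
log = []
loop.connect("press", lambda d: log.append("pressed " + d))
loop.connect("redraw", lambda d: log.append("redraw"))
loop.post("press", "send_button")   # system event (keypad/touchscreen)
loop.post("redraw")                 # internal event
loop.run()
print(log)   # -> ['pressed send_button', 'redraw']
```

Because everything runs in one thread, callbacks never race each other; the flip side, in real toolkits too, is that a long-running callback freezes the whole UI.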

There is no clear MVC formalism in that case: the controller is mixed with the view… and even the model is mixed with the widgets (so with the view)!

Qt's model is really identical to this one.

One last point, very relevant for application development and design: the notion of states. Each application is in fact a state machine displaying screens linked by transitions, like in the example below where, in state 1, the user decides to write an SMS, which opens an SMS editor screen; clicking send then goes to a selection of phone numbers.


Application State Machine: write sms example
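That screen flow can be written as a tiny state machine. The state and event names below are taken from the example; the code itself is just an illustrative sketch:

```python
# Sketch of an application state machine: screens are states,
# user events trigger transitions to the next screen.

TRANSITIONS = {
    ("idle", "write_sms"): "sms_editor",
    ("sms_editor", "send"): "number_selection",
    ("number_selection", "ok"): "idle",
}

class App:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]   # screen transition
        return self.state

app = App()
app.handle("write_sms")   # user decides to write an SMS
app.handle("send")        # clicking send
print(app.state)          # -> number_selection
```

Keeping the transition table as data, rather than scattering it through callbacks, is exactly what makes such flows easy for an interaction designer to review and change.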

Here is an attempt to formalize a modern UI framework with Data binding (for Model abstraction).


UI engines formalization
Control: equivalent to a widget, but where the MVC model is fully split. A Data Model has to be associated with it, alongside a Renderer, to make it usable.
Control Tree: equivalent to the widget tree: aggregation of Controls, association of the controls with a Renderer and a Data Model. Possibly specification of Event Handlers.
Data Model: Object defining (and containing when instantiated) a set of strongly defined and typed data that can be associated with a Control instance.
Data Binding: Service used to populate a Data Model.
Control Renderer: Object that is able to graphically represent a Control associated with a Data Model, using services from a Rendering Library.
Rendering Library: Set of graphical primitives, animations, etc.
Event Handling (and Event Handler): code (any language) reacting to events and modifying the current state machine, the Control Tree, etc.
Standardized Services: Interfaces defined to access middleware directly from the event handling code.
Server Abstraction: Possibility to transparently use Data Binding or any service call locally or remotely.
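Here is a hypothetical sketch of how those pieces fit together; every name in it is invented for illustration, following the definitions above:

```python
# Sketch of the formalization above: a Control is useless until a
# Data Model and a Renderer are associated with it; Data Binding
# populates the model from some backend service.

class DataModel:
    def __init__(self, fields):
        self.fields = fields           # strongly defined, typed data
        self.values = {}

class Control:
    def __init__(self, name, model, renderer):
        self.name, self.model, self.renderer = name, model, renderer

    def render(self):
        return self.renderer(self.name, self.model.values)

def label_renderer(name, values):
    # a Control Renderer: turns a control + its data into a representation
    return f"[{name}: {values.get('text', '')}]"

def bind(model, backend):
    # Data Binding: populate the Data Model from a service
    for field in model.fields:
        model.values[field] = backend[field]

# the backend could just as well be remote (Server Abstraction)
backend = {"text": "3 new messages"}
model = DataModel(["text"])
bind(model, backend)
title = Control("title", model, label_renderer)
print(title.render())   # -> [title: 3 new messages]
```

The point of the split is that the same Control can be rendered by a different Renderer, or fed by a different Data Binding, without touching the other pieces.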

Ok if you are still there, and your brain is still functional, here is what’s happening today in this area….

In traditional UI frameworks like GTK, Qt, win32, etc., the control tree description is done in C/C++… but a little niche technology has paved another way: it is called HTML. After all, an HTML web page defines a tree of controls; the W3C uses a pedantic term for it: the DOM tree. JavaScript callbacks are then attached to those widgets to allow user interaction. This is why all the new UI technologies are based on an XML description for this tree: it is muuuuuch easier to use, allows a quicker description of the controls, and above all it allows nice design tools to manipulate the UI… Apart from this XML representation, the majority of the UI technologies come with:

  • An animation model, allowing smooth transitions, popularized by the iPhone UI, but already there in MXML (the Adobe Flex format), XAML (the MS format), SVG, the TAT offer…
  • Modern rendering engines (Flash for Flex, MS has one, TAT Kastor).
  • Nice UI tools for quick implementation: Adobe Flex Builder, the MS Expression line, TAT Cascades, Digital Airways Kide, Ikivo SVG…
  • In many cases: a runtime, to be able to run a scripting language.
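To see why XML descriptions caught on, here is a toy control tree parsed with Python's standard library. The markup vocabulary is invented, but the idea is the same as MXML or XAML: the tree is data that a tool can generate and manipulate, instead of C/C++ code:

```python
# Sketch: a control tree described in XML instead of C/C++ code,
# then instantiated by walking the DOM-like element tree.

import xml.etree.ElementTree as ET

UI = """
<window>
  <titlebar text="Messages"/>
  <list>
    <item text="Alice"/>
    <item text="Bob"/>
  </list>
  <button label="Send"/>
</window>
"""

def build(node, depth=0):
    # instantiate one control per element; here we just pretty-print the tree
    attrs = " ".join(f'{k}="{v}"' for k, v in node.attrib.items())
    line = "  " * depth + node.tag + (f" {attrs}" if attrs else "")
    lines = [line]
    for child in node:
        lines.extend(build(child, depth + 1))
    return lines

tree = build(ET.fromstring(UI))
print("\n".join(tree))
```

A designer tool can round-trip such a file losslessly, which is exactly what is impossible when the tree only exists as compiled C++ calls.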

Here are some quick tables, really not complete, of some of the most relevant UI technologies in the PC and mobile phone space.

Just to explain the columns:

  • RIA : Rich Internet Application, delivered through a browser plugin
  • RDA : Rich Desktop Application: delivered through a desktop runtime
  • Runtime: OK, an overused name here; just a word to represent the piece of technology that allows the UI to run
  • UI: Technology to describe the control tree (you know what it means now!)
  • Event Handling: the dynamic UI part, and how to code it (which languages)
  • Tools: UI tools






Embedded rich UI technologies

So it's time to answer the main point of this post: how do those technologies help unleash designers' creativity? By defining a new development flow, allowing each actor to have a different role.

Here is an Adobe Flex “standard” development flow:


Adobe Flex&Air tools flow

In the next schema I try to depict a more complete Adobe Flex flow, adapted to the mobile world, where, for me, a central piece is missing today: it is not possible now to extend the Adobe AIR engine “natively”, which is mandatory for mobile platforms with very specific hardware, middleware and form factors.

So I take the Adobe flow more as an example to demonstrate how it should work than as the paradigm of the best UI flow, because this is not the case today (same remark for the MS flow; less true for TAT, for example).




An Adobe UI design flow for embedded

This clearly shows that the “creativity” phases are decoupled: the designers use different tools and can do a lot of iterations together, without any need of the software engineer, who can focus on implementing the services needed by the UI, optimizing the platform and adding middleware features.

  1. Interaction Designer defines the application's high-level views and rough flow
  2. Graphical Designer draws those first screens
  3. Interaction Designer imports them through Thermo
  4. Graphical Designer designs all the application's graphical assets
  5. Interaction Designer rationalizes and formalizes what kind of data, events and high-level services the application needs
  6. Interaction Designer & Software Engineer work together on the above aspects (HINT: A FORMALISM IS MISSING HERE); once done:
  7. Software Engineer prepares all the events and data services, tests them unitarily; in brief: prepares the native platform
  8. Interaction Designer continues working on the application flows and events, trying new stuff, experimenting with the Graphical Designer, based on the formalism agreed with the Software Engineer.
  9. Once done… the application is delivered to the Software Engineer, who will perform target integration, optimization… and perhaps (hum, certainly) some round trips with the other actors 🙂

So this is it! This flow really focuses on giving power to the designers… taking it from the engineer's hands. Some technologies are still missing to really offer a full mobile phone solution:

  • All the PC technologies are about building ONE application, not a whole system with strong interactions between applications. With today's technologies the designers are missing this part… leaving it to the engineer: how to cleanly do animations between applications?
  • Strong theming and customization needed for:
    • product variant management: operator variants, product variants (with different screen sizes and button layouts, for example), language variants (in many phones 60 languages have to be supported, but in separate language packs).
    • A not-well-known one: fast flashing of those variants on the factory line. It takes very long to flash the whole software of a mobile phone on the factory line… so if you are able to have a big common part and a “customization” part as small as possible but with the full UI… you gain productivity… and big money 🙂
  • Adapted presets of widgets or controls (try to do a phone with WPF or Flex: all the mandatory widgets are missing).

Anyway, a UI technology is only a way to interact with a user… to offer him a service. Most of the technologies presented above are about service delivery, not only UI… My next post will be about this notion of service delivery platforms.

[Update] Replaced the term “ergonomics specialist” by “Interaction Designer”, thanks Barbara, see comments below.

Thomas Menguy

Nokia to acquire Trolltech! Trying to guess why….

Big news in our world! Check the press release.

Price (the offer values the company at 100 million euros) is not so high compared to Trolltech's technical and community assets (but high when looking at the actual company revenues of 22 million euros). This is not a dot-com acquisition. Period.

The next game would be to understand why.

Trolltech provides a native development environment called Qt, which is a set of “OS services” (memory management, threads, etc.) and a famous widget library. This environment has been ported to desktop Linux, Windows, MacOS and embedded Linux (“Qt/Embedded”, now called Qtopia Core), on top of which a nearly complete phone application stack has been built: Qtopia.

The framework allows C++ development, but recently a Java version surfaced: Qt Jambi.

Trolltech also provides (and sells) some development tools: a RAD tool, Qt Designer; qmake, a command-line tool chain; a plugin for Visual Studio; and some internationalization utilities.

While huge adoption in the mobile phone market remains to be seen, Qt is at the heart of one of the biggest open source pieces of software: the KDE Linux desktop (…parent project of the now famous WebKit browser engine).

So, crossing this with Nokia's current strategy and this interesting quote from the Nokia PR:

“Trolltech’s deep understanding of open source software and its strong technology assets will enable both Nokia and others to innovate on our device platforms while reducing time-to-market. This acquisition will also further increase the competitiveness of S60 and Series 40.”

Kai Öistämö , Executive Vice President, Devices, Nokia

Here are the different bets:

  • It is widely known that the proprietary S40 is difficult to maintain and extend/modernize; porting Qt as a companion framework may allow Nokia to open its most widely used platform (S60 is negligible compared to S40's market share) to third-party developers… and open source developers.
  • Nokia wants cross-platform technologies to merge S60/S40 and the desktop environment, and so take advantage of the HUGE Qt developer pool.
  • Nokia desperately needs a credible platform and a set of APIs to counter Android in the web services area… and the Java Qt makes sense here.
  • Does Nokia have some ambitions for KDE, to use it as the base OS for its forthcoming “personal computer”, touted as the next big thing and Nokia's next strategy?

Be prepared for an S40 and an S60 Qt port… and perhaps an opening of the S40 platform, at least for selected third parties.

Anyway I quite don’t get this …

  • Regarding Hildon, the Maemo Tablet OS running on the Nokia Internet Tablets (N770, N800 and N810): this one is based on GTK, the Qt arch-rival on the Linux desktop, and uses a Mozilla-based browser, so it goes in the opposite technical direction. Will it be cancelled as it is, to run a Qt-based Tablet OS?
  • For the KDE desktop: dealing with a little company like Trolltech is one thing; having Nokia as the main backer of its framework is something else. How will the open source community react?

What do you think Nokia has in mind?


While you were out.. the mobile internet took off

[guest blogger Thomas Menguy praises the virtues of web applications on the iPhone.. and realises how the mobile internet has already taken off]

Ok I admit, I have an iPhone. I love it, blablabla, you know the story already. It has its flaws, but as an old-time mobile software engineer I'm really struck by one BIG fact: the applications I use the most on it are fully web based! My IM messenger (JiveTalk), my English/French dictionary (Ultralingua), my mail and RSS readers (special versions of Gmail and Google Reader)… even my all-time favorite mobile game, Bejeweled, is web based!

What a shock… I wasn't prepared for that: when Steve Jobs told us that the only way to add applications would be (at first) through the web browser, I was the first to laugh: only raw C++ is meaningful for applications, and a web browser is a mere toy compared to a real application framework.

How wrong I was. And here is why. (and no it won’t be only about the iPhone)

  • Unlimited and affordable data plans, and efficient bandwidth and coverage: I'm in Europe (France), and here network coverage and EDGE (2.5G) are very efficient.
  • WebKit and Mozilla: the WebKit engine tends to become the de facto mobile web browser (check what Pleyo is doing), embedded in S60, MacOS, Android… The only other credible contender is Nokia's Mozilla version (my Nokia N800 is simply unbeatable for web browsing).
  • Rise of ad-hoc web service frameworks: the famous and numerous web widget frameworks (Webwag being one to be noticed), and Yahoo Go, for example.
  • And the biggest one, which is vastly overlooked: modern websites (sorry, web services) are fully Model/View/Controller (Ruby on Rails, but above all Struts 2, etc.). What does it mean in human-readable language? It is VERY easy to adapt the content/services of a website to different browsers and ways of presenting data. Look at the plethora of “iPhone-optimized” sites (eBay, Dailymotion, Facebook, etc.) that have popped up everywhere in a few months.
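To see why server-side MVC makes those iPhone-optimized sites so cheap to build, here is a toy sketch: one model, two views (a desktop one and a phone one), chosen per client. All names and markup are invented for illustration:

```python
# Sketch: with MVC on the server, serving an "iPhone-optimized" site
# is mostly a matter of swapping the view over the same model.

model = {"title": "Inbox", "items": ["Alice", "Bob", "Carol"]}

def desktop_view(m):
    # full-page representation for a big screen
    rows = "".join(f"<tr><td>{i}</td></tr>" for i in m["items"])
    return f"<h1>{m['title']}</h1><table>{rows}</table>"

def mobile_view(m):
    # same data, much lighter representation for a small screen
    return m["title"] + ": " + ", ".join(m["items"])

def controller(user_agent):
    # pick the view per client; the model and controller are untouched
    view = mobile_view if "iPhone" in user_agent else desktop_view
    return view(model)

print(controller("Mozilla/5.0 (iPhone; ...)"))  # -> Inbox: Alice, Bob, Carol
```

The business data never changes; only the final rendering step does, which is exactly why those adapted sites appeared in months rather than years.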

Those approaches have something in common

  • Need of a reliable wireless data link
  • A well-architectured network backend providing optimized business data and adapted rendering data (the latter is not mandatory; check RSS, for example, where the business data has no notion of representation in it).
  • An “on-client” web service framework: a browser with standard plus proprietary APIs like the iPhone's Safari, a limited and fully proprietary engine like Yahoo Go!, or a full OS with the complete stack like Android and… the iPhone OS (OK, don't forget the “old” high-level OSes like S60 and Windows Mobile).

Everything seems to be in place, and from what we saw above a good web service client platform would have to:

  1. Be fun to use and compelling, tailored for each user
  2. Be VERY efficient for the phone common tasks (phone call, address book)
  3. Offer a nice and easy way to deploy data representation and flow control from existing web service backends… with good performance and relatively wide access to the underlying platform and data

For me the first two shouldn't be understated (just try a WinMobile phone for a few months to understand what I mean 🙂): the device remains a phone, a communication machine, and voice is still the undefeated champion for communication. This is where the iPhone is groundbreaking at first sight… and also where I'm not sure what Google Android will deliver (call me skeptical if you want…).

The third point may bring a lot of optimism… as it implies that we don't need a single platform anymore, but a bunch of deployment possibilities, tailored for each device/client or even each service. Android and the iPhone may be seen as such platforms, with at least two of those deployment possibilities: the browser and native application development; here Android is much more friendly to the Java/web programmer than the iPhone. But we could perfectly imagine devices with more deployment options, or others completely different but close enough to web development standards to allow fast adaptation of web backends… why not an iPhone with an Android sandbox?

In the end, the famous “cloud” (the network) is really shaping the “on device” clients, allowing more and more diversity, and there won't be a “one size fits all” solution…

Thanks, Steve Jobs, for being the first to have put in place all the elements of the chain, dealing with carriers, content providers and service providers… and coming up with a great consumer electronics design.

Does Google want to go further? Not sure for now, but the US 700 MHz auction has to be followed very carefully, because if this spectrum becomes “free” of the carriers, we don't know how fast things could go!

– Thomas

Execution engines: understanding the alphabet soup of ARM, .NET, Java, Flash …

[mobile development platforms, execution engines, virtualisation, Flash, Java, Android, Flex, Silverlight.. guest blogger Thomas Menguy demystifies the alphabet soup of mobile software development].

The news at All About Symbian raised a few thoughts about low level software:

Red Five Labs has just announced that their Net60 product, which enables .NET applications from the Windows world to run unchanged under S60, is now available for beta testing.

.NET on S60 3rd Edition now a reality?

This is really interesting: even the battle for languages/execution environment is not settled!

For years mobile coding was tightly coupled with assembly code, then C, and to a lesser extent C++. The processor of choice is the ARM family (some others exist, but no longer in the phone industry)… and this was before Java.

Basically Java (the language) is no more than a virtual processor with its own instruction set, and this virtual processor, also called a virtual machine (or JVM in the case of Java), simply does what every processor does: it processes assembly code describing the low-level actions to be performed by the processor to execute a given program/application.

On the PC, other execution engines have been developed. The first obvious one, the native one, is the venerable x86 instruction set: thanks to it, all PC applications are “binary compatible”. Then Java, and more recently… the Macromedia/Flash runtime (yes, Flash is compiled to a byte code which defines its own instruction set). Another big contender is the .NET runtime… with, you guessed it, its own instruction set.

In the end, it is easy to categorize the execution engines:

  • The “native” ones: the hardware directly executes the actions described in a program, compiled from source code to a machine-dependent format. A native ARM application running on an ARM processor is an example; or, partially, a Java program running on an ARM with Jazelle (some Java byte codes are directly implemented in hardware).
  • The “virtual” ones: Java, .NET, JavaScript/Flash (ActionScript is not so far from JavaScript; the two languages will be merged with the next version: ActionScript 3 == JavaScript 2 == ECMAScript 4), where the source code is compiled to a machine-independent binary format (often called byte code)… But what should an ARM emulator running on an x86 PC be called? You guessed it: virtual.
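To make “virtual processor” concrete, here is a toy byte-code interpreter in Python: a handful of invented instructions, nothing like a real JVM, but the same principle of executing a machine-independent instruction set:

```python
# Sketch of a virtual execution engine: a stack machine that
# "executes" a machine-independent byte code, just like a JVM or
# the Flash/.NET runtimes do at a much larger scale.

PUSH, ADD, MUL, PRINT = range(4)   # our tiny instruction set

def run(bytecode):
    stack, output, pc = [], [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:
            pc += 1
            stack.append(bytecode[pc])   # operand follows the opcode
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            output.append(stack.pop())
        pc += 1
    return output

# "compiled" program: (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT]
print(run(program))   # -> [20]
```

The byte-code list is the same on ARM, x86 or anything else; only the interpreter has to be ported, which is the whole binary-compatibility argument in miniature.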

So why bother with virtual execution engines?
Java was built on the premise of the now famous (and defunct) “write once, run everywhere”, because at that time (and I really don't know why) people thought it was enough to reduce the “cross-platform development issue” to low-level binary compatibility, simply allowing the code to be executed. And we now know it is not enough!

Once the binary issue was fixed, the really big next one was APIs (and, to be complete, the programming model)… and the nightmare begins. When we say Java we only name the language, not the available services; same for JavaScript, C# or ActionScript. So development platforms started to emerge: CLDC, J2ME, the .NET framework, Flash, Adobe Flex, Silverlight, JavaScript+Ajax, Yahoo widgets… but after all, what are GNOME, KDE, Windows, MacOS, S60, WinMob?… Yes: development platforms.

The Open Source community has quickly demonstrated that binary compatibility was not that important for portability: once you have the C/C++ source code and the needed libraries plus a way to link everything, you can simply recompile for ARM/x86 or any other platform.

I've made a big assumption here: you have “a way to link everything”. And this is really a big assumption: on many platforms you don't have any dynamic linking, nor a library repository or dynamic service discovery… so how do you cleanly expose your beloved APIs?

This is why OSGi has been introduced, much like COM, CORBA, some .NET mechanisms, etc.: it is about component-based programming, encapsulating a piece of code around what it offers (an API, some resources) and what it uses (APIs and resources).
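The component idea can be sketched in a few lines: each component declares what it provides, and other components look services up through a registry instead of linking against each other directly. This is an OSGi-flavoured toy, not the real API:

```python
# Sketch of component-based programming: components register the
# services they provide and resolve what they use through a registry,
# instead of being linked directly against each other.

class Registry:
    def __init__(self):
        self.services = {}

    def provide(self, name, implementation):
        self.services[name] = implementation

    def require(self, name):
        # dynamic service discovery: look the API up at run time
        if name not in self.services:
            raise LookupError(f"no provider for {name}")
        return self.services[name]

registry = Registry()

# a component offering an "http" service
registry.provide("http", lambda url: f"GET {url} -> 200 OK")

# another component using it, with no compile-time link to the provider
def fetch_homepage(reg):
    http = reg.require("http")
    return http("http://example.com/")

print(fetch_homepage(registry))   # -> GET http://example.com/ -> 200 OK
```

Swapping the "http" provider, or rejecting a component whose requirements are unmet, then becomes a registry decision rather than a relink, which is the point of the approach on platforms without dynamic linking.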

Basically an execution engine has to:

  • Allow binary compatibility: abstracting the raw hardware, i.e. the processor, either using a virtual machine and/or a clean build environment
  • Allow clean binary packaging
  • Allow easy use and exposition of services/APIs

Note that virtual engines dissociate the language(s) from the engine: Java… well, Java for the JVM; ActionScript for Flash; all the # languages for .NET. An execution engine is nothing without the associated build chain and development chain around the supported languages.

In fact this is key, as all those modern languages have a strong common point: developers do not have to bother with memory handling, and as all the C/C++ coders will tell you, that means around 80% fewer bugs, so a BIG productivity boost. But also (and it is something a tier-one OEM confirmed): it is way easier to train and find “low cost” coders for those high-level languages than C/C++ experts!… another development cost gain.

A virtual execution engine basically brings productivity gains and lower development costs thanks to modern languages… but we are far, far away from “write once, run everywhere”.

As discussed before, that is not enough, and here come the real development environments based on virtual execution engines:

  • The .NET framework platform: a .NET VM at heart, with a big, big set of APIs (this is why I would like to know which APIs are exposed in Red Five Labs' S60 .NET port)
  • Silverlight: also a .NET VM at heart + some APIs and a nice UI framework
  • J2ME: a JVM + JSRs + … well, different APIs for each platform
  • J2SE: a JVM + a lot of APIs
  • J2EE: a JVM + “server side” frameworks
  • Flex: the Adobe ActionScript Tamarin VM + Flex APIs
  • Google Android: a Java VM + Google APIs… but, more interestingly, also C++: as Android uses IDL interface descriptions, C++/Java interworking will work (I will have to cover it at length in another post)
  • …and the list goes on

What really matters is the development environment as a whole, not simply a language (for me this is where Android may be interesting). For example, the Mono project (which aims to bring .NET execution to Linux) was of limited interest before they ported Windows Forms (the big set of APIs for graphical stuff in the .NET framework) and made it available in their .NET execution engine.

What I haven't mentioned is that the development cost gains allowed by modern languages come at a cost: performance.
Even if Java/.NET/ActionScript JITs (Just-In-Time compilers: VM technology that translates virtual byte code to real machine code before execution) partially helped for CPU, it is still not the case for the RAM used; and in the embedded world Moore's law doesn't help you: it only helps to reduce silicon die size, to reduce chipset cost. So using a virtual engine will actually force you to… upsize your hardware, increasing the BOM of your phone.

And it isn't a vague assumption: when your phone has to be produced in the 10-million-unit range, using 2MB of RAM, 4MB of flash and an ARM7-based chipset helps you a lot to make money selling at low cost… and some nights/days have been spent optimizing stuff to make it happen smoothly, very recently…

Just as an example: what was done first at Open-Plug was a low cost execution engine, not a virtual one, running “native code” on ARM and x86, with service discovery and a dedicated toolchain: a component platform for low cost phones. It was then possible to add a development environment with tools and middle- to high-level services.

A key opportunity may be a single framework with multiple execution engines, for easy adaptation to legacy software and a productivity boost for certain projects/hardware, or some parts of the software.

And in this area the race is not over, because another beast may come in: “virtualization”. In the above discussion one execution engine benefit was omitted: it is a development AND execution sandbox. This notion of sandbox, together with the earlier argument about performance, becomes really essential when you need to run time-critical code on one side and a full-blown “fat” OS on the other; to be more specific, if you need to run a GSM/UMTS stack written on a legacy RTOS and an open OS (like Linux) on a single-core chipset. Today this is not possible, or very difficult: it may be achieved by low-level tricks if one entity masters the whole system (like when Symbian OS was running in a Nokia NOS task), or with real virtualization technologies, like what VirtualLogix is doing with NXP's high-end platforms. And in that case the cost gain is obvious: a single-core vs a dual-core chipset…

But why bother with virtualization rather than rewriting the stacks for other OSes? Because this is simply not achievable in our industry's time frame (nearly all the chipset vendors have tried and failed).

And again, the desktop was first in this area (see VMware and others): Intel and AMD are introducing some hardware to help this process… to have multiple virtual servers running on a single CPU (or more).

So where are all those technologies leading us? Maybe more freedom for software architects, more productivity, but above all more reuse of disparate pieces of software, because it does not seem possible to build a full platform from scratch anymore; and making those pieces run in clean sandboxes is mandatory, as they haven't been designed to work together.

Anyway, once you know how to cleanly write code that runs independently of the hardware, you have to offer a programming model, implying how resources are shared between your modularized pieces of code… and in that respect execution engines are of no help: you need an application framework (like Hiker from ACCESS; Android is about that, but so are S60, Windows Mobile, Open-Plug's ELIPS, …). It will abstract the notion of resources for your code: screen, keypad, network, CPU, memory… but this is another story, for another post.

Feel free to comment!