[Mobile development platforms, execution engines, virtualisation, Flash, Java, Android, Flex, Silverlight… guest blogger Thomas Menguy demystifies the alphabet soup of mobile software development.]
This news at All About Symbian raised a few thoughts about low-level software:
Red Five Labs has just announced that their Net60 product, which enables .NET applications from the Windows world to run unchanged under S60, is now available for beta testing.
.NET on S60 3rd Edition now a reality?
This is really interesting: even the battle for languages and execution environments is not settled!
For years mobile coding was tightly coupled with assembly code, then C and, to a lesser extent, C++. The processor of choice is the ARM family (some others exist, but no longer in the phone industry)… and this was before Java.
Basically Java (the language) is no more than a virtual processor with its own instruction set. This virtual processor, also called a virtual machine (the JVM in the case of Java), simply does what every processor does: it executes some assembly code, here byte code, describing the low-level actions to be performed to run a given program/application.
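To make the “virtual processor” idea concrete, here is a toy stack-based VM sketch. The opcodes and the sample program are invented for illustration; real JVM byte code is far richer, but the execution loop, fetch an instruction and act on a stack, is the same in spirit:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy stack-based virtual machine: like the JVM, it is just a
// "processor" whose instruction set happens to be defined in software.
public class ToyVM {
    static final int PUSH = 0, ADD = 1, MUL = 2;   // invented opcodes

    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;                                 // program counter
        while (pc < code.length) {
            switch (code[pc++]) {
                case PUSH: stack.push(code[pc++]); break;              // operand follows opcode
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
            }
        }
        return stack.pop();                         // result is left on the stack
    }

    public static void main(String[] args) {
        // "(2 + 3) * 4" expressed in our virtual instruction set
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL };
        System.out.println(run(program));           // prints 20
    }
}
```

A hardware processor does exactly this with real registers and real memory; Jazelle, mentioned below, moves part of such a loop into silicon.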
On the PC, other execution engines have been developed. The first, obvious, native one is the venerable x86 instruction set: thanks to it, all PC applications are “binary compatible”. Then came Java, and more recently… the Macromedia/Adobe Flash runtime (yes, Flash is compiled into a byte code which defines its own instruction set). Another big contender is the .NET runtime… with, you guessed it, its own instruction set.
In the end it is easy to categorize execution engines:
- The “native” ones: the hardware directly executes the actions described in a program, compiled from source code to a machine-dependent format. A native ARM application running on an ARM processor is an example, or, partially, a Java program running on an ARM with Jazelle (some Java byte codes are executed directly in hardware).
- The “virtual” ones: a virtual machine, itself a native program, interprets or translates the byte code of a machine-independent format. A Java program running on a standard JVM is the obvious example.
So why bother with virtual execution engines?
Java has been built on the premise of the now famous (and defunct) “write once, run everywhere”: at that time (and I really don’t know why) people thought it was enough to reduce the “cross-platform development issue” to low-level binary compatibility, simply allowing the code to be executed. And we now know it is not enough!
The Open Source community quickly demonstrated that binary compatibility was not that important for portability: once you have the C/C++ source code, the needed libraries, and a way to link everything, you can simply recompile for ARM, x86 or any other platform.
I’ve made a big assumption here: you have “a way to link everything”. And this really is a big assumption: many platforms have no dynamic linking, no library repository, no dynamic service discovery… so how do you cleanly expose your beloved APIs?
This is why OSGi was introduced, much like COM, CORBA, some .NET mechanisms, etc.: it is about component-based programming, encapsulating a piece of code behind what it offers (an API, some resources) and what it uses (APIs and resources).
Basically an execution engine has to:
- Allow binary compatibility: abstracting the raw hardware, i.e. the processor, either with a virtual machine and/or a clean build environment
- Allow clean binary packaging
- Allow easy use and exposure of services/APIs
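As a minimal sketch of that last point, assuming a hypothetical registry class (real systems like OSGi add versioning, life cycle, dependency resolution and much more), component-style exposure and discovery of an API can look like this:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of component-based service exposure in the OSGi/COM
// spirit: each component declares what it offers through a registry and
// looks up what it uses by interface, never by direct linkage.
// The registry and the Logger service are invented for illustration.
public class ServiceRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    // a component "exposes" an API by registering an implementation
    public <T> void offer(Class<T> api, T impl) { services.put(api, impl); }

    // another component "uses" an API by discovering it at runtime
    public <T> T use(Class<T> api) { return api.cast(services.get(api)); }

    interface Logger { String log(String msg); }    // an offered API

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.offer(Logger.class, msg -> "[log] " + msg);
        Logger logger = registry.use(Logger.class); // dynamic discovery
        System.out.println(logger.log("hello"));    // prints [log] hello
    }
}
```

The point is that neither side needs a static link to the other: the consumer only knows the interface, which is exactly what platforms without dynamic linking are missing.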
Virtual engines also make it possible to dissociate the language(s) from the engine: Java… well, Java for Java, ActionScript for Flash, all the # languages for .NET. An execution engine is nothing without the associated build chain and development chain around the supported languages.
In fact this is key, as all those modern languages have a strong common point: developers do not have to bother with memory handling, and as all C/C++ coders will tell you, that means around 80% fewer bugs, so a BIG productivity boost. But also (and a tier-one OEM confirmed this): it is far easier to train and find “low cost” coders for those high-level languages than C/C++ experts!… another development cost gain.
A virtual execution engine basically brings productivity gains and lower development costs thanks to modern languages… but we are far, far away from “write once, run everywhere”.
As discussed before it is not enough, and here come the real development environments based on virtual execution engines:
- .NET Framework platform: a .NET VM at heart, with a big, big set of APIs (I would love to know which APIs are exposed in Red Five Labs’ S60 .NET port)
- Silverlight: also a .NET VM at heart + some APIs and a nice UI framework
- J2ME: a JVM + JSRs + …well, different APIs for each platform
- J2SE: a JVM + a lot of APIs
- J2EE: a JVM + “server side” frameworks
- Flex: Adobe’s ActionScript Tamarin VM + Flex APIs
- Google Android: a Java VM + Google APIs… but, more interestingly, also C++: as Android uses IDL interface descriptions, C++/Java interworking will work (I will cover it at length in another post)
- …and the list goes on
What really matters is the development environment as a whole, not simply a language (for me this is where Android may be interesting). For example the Mono project (which aims to bring .NET execution to Linux) was of limited interest before they ported Windows Forms (the big set of APIs for building graphical applications in the .NET Framework) and made it available in their .NET execution engine.
What I haven’t mentioned is that the development cost gains allowed by modern languages come at a cost: performance.
Even if JIT compilation (Just-In-Time compilers: VM technology that translates virtual byte code into real machine code before execution) has partially helped Java/.NET/ActionScript on the CPU side, it has not helped on the RAM side. And in the embedded world Moore’s Law doesn’t help you either: it only reduces silicon die size, and thus chipset cost. So using a virtual engine will actually force you to… upsize your hardware, increasing the BOM of your phone.
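The JIT idea itself can be sketched in a few lines: pay a one-time translation cost so that execution no longer dispatches on every virtual instruction. This toy version (invented opcodes, “compiling” to chained Java functions instead of real machine code) only illustrates the principle:

```java
import java.util.function.IntUnaryOperator;

// A toy illustration of the JIT principle: instead of re-dispatching on
// each virtual instruction every time a program runs, translate the
// sequence once into a directly executable form. Real JITs emit machine
// code; here we "compile" to one fused Java function, which at least
// removes the per-instruction dispatch loop. Opcodes are invented.
public class ToyJIT {
    static final int ADD_CONST = 0, MUL_CONST = 1;

    // one-time translation: byte code -> executable function
    static IntUnaryOperator compile(int[] code) {
        IntUnaryOperator fn = x -> x;                  // start from identity
        for (int pc = 0; pc < code.length; pc += 2) {
            int operand = code[pc + 1];
            IntUnaryOperator step;
            if (code[pc] == ADD_CONST)      step = x -> x + operand;
            else if (code[pc] == MUL_CONST) step = x -> x * operand;
            else throw new IllegalArgumentException("unknown opcode");
            fn = fn.andThen(step);                     // fuse into one pipeline
        }
        return fn;                                     // no dispatch at run time
    }

    public static void main(String[] args) {
        int[] program = { ADD_CONST, 3, MUL_CONST, 4 };  // (x + 3) * 4
        IntUnaryOperator compiled = compile(program);    // translation cost paid once
        System.out.println(compiled.applyAsInt(2));      // prints 20
    }
}
```

Note what the sketch does not fix: the translated form, the VM, the garbage-collected heap all still sit in RAM, which is exactly the cost discussed above.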
And this isn’t a vague assumption: when your phone has to be produced in the tens of millions of units, using 2MB of RAM, 4MB of flash and an ARM7-based chipset helps you a lot to make money selling at low cost… and some nights and days have been spent very recently optimizing stuff to make it happen smoothly…
Just as an example, what was done first at Open-Plug was a low-cost execution engine, not a virtual one, running “native code” on ARM and x86, with service discovery and a dedicated toolchain: a component platform for low-cost phones. It then became possible to add a development environment with tools and middle-to-high-level services.
A key opportunity may be a single framework with multiple execution engines: easy adaptation to legacy software, plus a productivity boost for certain projects/hardware, or for some parts of the software.
And in this area the race is not over, because another beast may come in: “virtualization”. The discussion above omitted another execution engine benefit: it is a development AND execution sandbox. This notion of sandbox, together with the performance argument, becomes really essential when you need to run time-critical code on one hand and a full-blown “fat” OS on the other; to be more specific, when you need to run a GSM/UMTS stack written on a legacy RTOS and an open OS (like Linux) on a single-core chipset. Today this is not possible, or very difficult: it may be achieved by low-level tricks if one entity masters the whole system (like when Symbian OS was running in a Nokia NOS task), or with real virtualization technologies like what VirtualLogix is doing with NXP high-end platforms. And in that case the cost gain is obvious: a single-core vs a dual-core chipset…
But why bother with virtualization instead of rewriting the stacks for other OSes? Because this is simply not achievable in our industry’s time frame (nearly all the chipset vendors have tried and failed).
And again the desktop was first in this area (see VMware and others): Intel and AMD should introduce some hardware to help this process… to have multiple virtual servers running on a single CPU (or more).
So where are all those technologies leading us? Maybe to more freedom for software architects, more productivity, but above all more reuse of disparate pieces of software, because it no longer seems possible to build a full platform from scratch, and making those pieces run in clean sandboxes is mandatory since they haven’t been designed to work together.
Anyway, once you know how to cleanly write code that runs independently from the hardware, you have to offer a programming model, implying how to share resources between your modularized pieces of code… and in that respect execution engines are of no help: you need an application framework (like Hiker from ACCESS; Android is about that, but also S60 and Windows Mobile, OpenPlug ELIPS, …). It will abstract the notion of resources for your code: screen, keypad, network, CPU, memory, … but this is another story, for another post.
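To give a feel for what such a framework adds on top of an execution engine, here is a hypothetical sketch. All names are invented; Hiker, Android, S60 and the others each define their own, much richer, version of these concepts:

```java
// A hypothetical sketch of the application-framework layer: the
// framework, not the application, owns the resources (screen, keypad,
// network, ...) and hands out abstractions of them. Invented names only.
public class FrameworkSketch {
    interface Screen { String show(String text); }  // abstracted display
    interface Keypad { int lastKey(); }             // abstracted input

    // the framework decides how resources are shared between applications
    static class Framework {
        Screen screen() { return text -> "drawn:" + text; }
        Keypad keypad() { return ()   -> 42; }
    }

    public static void main(String[] args) {
        Framework fw = new Framework();
        // the application never touches the hardware, only framework APIs
        System.out.println(fw.screen().show("hello"));  // prints drawn:hello
        System.out.println(fw.keypad().lastKey());      // prints 42
    }
}
```

The design point is the inversion of ownership: applications request resources from the framework, so the framework can arbitrate between them, which raw execution engines never do.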
Feel free to comment!