Adobe defends its mobile strategy

[Is Adobe’s mobile strategy doomed? Mark Doherty, guest author and Platform Evangelist for Mobile and Devices at Adobe, responds to the recent criticism and argues that the best is yet to come]

The Big Picture
Adobe’s vision – to revolutionize how the world engages with ideas and information – is as old as Adobe itself. In fact, 28 years ago the company was founded on technologies like PostScript, and later PDF, which enabled the birth of desktop publishing across platforms.

Today Flash is used for 70% of online gaming and 75% of video, and has been driving innovation on the web for over a decade. Flash Player’s decade-long growth can be attributed to three factors:

  1. Adobe customers such as BBC, Disney, EPIX, NBC, SAP and Morgan Stanley can create the most expressive web and desktop applications using industry-leading tools.
  2. The Flash Player enables unparalleled cross-platform consistency, distribution and media delivery for consumers on the desktop (and increasingly on mobile).
  3. A huge creative community of designers, developers and illustrators is involved in defining Flash, and hence driving the web forward.

Now, as consumers diversify their access to the web, they are demanding the same experiences irrespective of the device.  Content providers and OEMs across industries recognize this trend and are delivering Flash Player and AIR as complementary web technologies to extend their vertical propositions.  The process of actually delivering this is not trivial, and was made more complex by a failing global economy, but we are on schedule and the customer always wins.

Where we’ve been
The success of Flash on mobile phones has been second only to Java in terms of market penetration, but second to none in terms of consistency.  According to Strategy Analytics, Flash has shipped on over 1.2 billion devices, making it the most consistent platform available on any device.

In 2008 Adobe announced a new strategy for reseeding the market with a single, standardised Flash runtime, creating the Open Screen Project, an alliance of mobile industry partners to help push this new vision.  So why the change of plan?

In the historically closed, “wild west” mobile ecosystem, web content providers and developers have found it too difficult to reach mobile devices. In practical terms, it was too difficult for the global Flash community to reach consumers, and to do so in a manner consistent with the consumer reach of desktop content.  Japan has been the most successful region because of deep involvement from NTT DoCoMo and Softbank, and because consistent web distribution was enabled there.

That said, agencies such as Smashing Ideas, ustwo and CELL (sorry to those I’m missing out) have established valuable businesses in this space by building strong partnerships with OEMs.

On the top end of this success scale, Forbes recently announced that Yoshikazu Tanaka has become the first Flash billionaire with Gree, the incredibly successful Flash Lite games portal in Japan.  (Gree is a “web service”, neither desktop nor mobile, and is indicative of what can be achieved using Flash as a purely horizontal technology across devices.)

In all, our distribution and scaling plans worked very well for Adobe, but outside Japan the mobile “walled gardens”, and the web on devices today, didn’t work for our customers.  The cost of doing business with multiple carriers in North America and Europe and the lack of web distribution to a common runtime left our customers with few choices. It was time for a new plan.

Open Screen Project
Delivering on the Open Screen Project vision at global scale with 70 partners is a huge task; it was always going to take about two years.  We are very much on schedule with Flash Player 10.1 and AIR, although eager to see it rollout.

However, describing the goals of the Open Screen Project in terms of dates, forecast market share, Apple’s phone or their upcoming tablet, specific chipsets or Nokia hardware is to miss the whole point.  The Open Screen Project is not a “mobile” solution; it’s about the global content ecosystem.

In summary: connecting millions of our developers and designers with consumers via a mix of marketplaces and the open web.

Google and Microsoft are great examples of companies that have competitive technologies and services, but both companies still use Flash today to reach consumers.  Google uses Flash for Maps, Finance and YouTube, and Microsoft for MSN Video and advertising.  So indeed we have co-opetition between Silverlight and Flash, or Omniture and Google Analytics, but together our goal is to enable consumers to browse more of the web on Android, Windows Phone and other devices in the future.

Today, over 170 major content providers (including Google) are working with us right now to optimize their HTML and Flash applications for these mobile devices.  In the coming months we’ll begin the long rollout process: updating firmware and enabling Flash Player downloads on OEM marketplaces.  We project that by 2012, 53% of smartphones will have Flash Player installed.

It’s really exciting to see it coming together with so many big names involved. Why not have a peek behind the curtain?

Flex Mobile Framework
To make the creation of cross-platform applications even simpler, Adobe is working on the Flex Mobile Framework. Essentially, we have taken all the best elements of the open source Flex 4 framework and optimized them for mobile phones.

Using the framework and components, you will be able to create applications that automatically adapt to orientation and lay out correctly on different screens. The most important addition is that the Flex Mobile Framework “understands” different UI paradigms across platforms. For example, the iPhone doesn’t have a hard back button, so the Navigation bar component will present a soft back button on that platform.
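To make the idea concrete, here is a minimal sketch in Python of how a platform-aware component could decide to add a soft back button. All names here are hypothetical; the actual Flex Mobile Framework API had not been published at the time.

```python
# Hypothetical sketch of a platform-aware navigation bar: devices without a
# hardware back key get a soft back button added to the bar description.

def navigation_bar(platform, title):
    """Return a list of (part, value) pairs describing the bar's contents."""
    items = [("title", title)]
    if platform == "iphone":                 # no hard back button on this platform
        items.insert(0, ("back_button", "Back"))
    return items

print(navigation_bar("iphone", "Inbox"))     # soft back button included
print(navigation_bar("android", "Inbox"))    # relies on the hardware back key
```

The point is that the component description stays the same across platforms; the framework decides per platform how to realize it.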

In terms of developer workflow, we expect that all background logic of applications will run unchanged.  User interfaces and high-bitrate video will need some adjustments for some hardware, though most changes will be basic: bigger buttons, higher-compression video, and adapting HTML for mobile browsers.

Over time, our goal with the Flex Mobile Framework is to enable our customers to create their applications within a single code base, applying some tweaks per platform for things like Lists, Buttons or transitions.  In this sense we can expect to enable the creation of applications and experiences that are mobile-centric, and yet cost-effective by avoiding fragmented solutions where appropriate.

We are aiming to show the Flex Mobile Framework later in the year, and I’d love to see it supported in Catalyst in the future.

The Year Ahead
Throughout 2010 we will see Flash Player 10.1 on Palm’s WebOS, Android 2.x, with Symbian OS and Windows Phone 7 coming in the future. In addition to that we also have plans to bring Flash Player 10.1 to Blackberry devices, netbooks, tablets and of course the desktop. For less powerful feature phones we’ve got Flash Lite, and all of these platforms will demonstrate Flash living happily with HTML5 where it’s available.

Adobe AIR 2 is also in beta right now, enabling developers to create cross-platform applications that live outside the browser on Windows, Mac and Linux computers. AIR is of course mobile-ready, and later in the year we’ll be bringing AIR to Android phones, netbooks and tablets. On top of that, you will also be able to repackage your AIR applications for the iPhone with Flash Professional CS5 very soon.

The rollout and scale of Flash Player and AIR distribution are now inevitable, and were largely committed over a year ago.

There are risks of course; these ecosystems are moving targets just like they have always been.  However, I’m extremely confident that we can build upon our previous successes, learn from our mistakes and innovate faster than any of our competitors.

– Mark Doherty
Platform Evangelist for Mobile and Devices at Adobe

UI Technologies are trendy…but what are they really good for?

[UI development flow and actors: Graphical Designer, Interaction Designer, Software Engineer; classical technologies: GTK, Qt; next generation: Flex, Silverlight, WPF, TAT, XUL, SVG… guest blogger Thomas Menguy describes the main concepts behind all the UI technologies, what the new-generation ones have in common, what those modern approaches bring to the product development flow… and what is missing for the mobile space].

A good UI is nothing without talented graphical designers and interaction designers. How is the plethora of new UI technologies helping to unleash their creativity? What are the main concepts behind those technologies? Let’s try to find out!

UI is trendy… thank you MacOS X, Vista and iPhone!

image
UIQ

image
S60

image
iPhone

Put the designers in the application development driver’s seat!

Here is a little slide about the actors involved in UI design

image

UI flow actors and their expertise

What does it mean?

Different actors, different knowledge …. So different technologies and different tools!

Those three roles can be clearly separated only if the UI technology allows it. This is clearly not the case in today’s mainstream UI technologies, where the software engineer is in charge of implementing both the UI and the service part, most of the time in C/C++, based on specifications (Word documents, Photoshop images, sometimes Adobe Flash prototypes) that are subject to interpretation.

  • The technologies used by the designers have nothing in common with the ones used to build the actual UI.

  • The technologies that allow UI implementation require heavy engineering knowledge.

  • Big consequence: the software engineer decides at the end!

The picture is different for web technologies, where it has been crucial and mandatory to keep the service backend strongly decoupled from its representation: web browsers have different APIs and behaviors, backends have to be accessed in many ways other than the web representation… and above all, data is remote while presentation is “half local/half remote”.

Separating representation, interaction and data has been the holy grail of application and service development for years. It has been formalized through a well-known pattern (or even paradigm, in this case): MVC (Model View Controller).

image

MVC pattern / source: wikipedia
From wikipedia: http://en.wikipedia.org/wiki/Model-view-controller
Model
The domain-specific representation of the information on which the application operates. Domain logic adds meaning to raw data (e.g., calculating if today is the user’s birthday, or the totals, taxes, and shipping charges for shopping cart items).
Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention the data access layer because it is understood to be underneath or encapsulated by the Model.
View
Renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model for different purposes.
Controller
Processes and responds to events, typically user actions, and may invoke changes on the model.
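As a minimal illustration of the pattern (a Python sketch with hypothetical names, not tied to any particular framework), the three roles separate cleanly: the model holds domain data, the view renders it, and the controller turns user events into model changes.

```python
# Minimal MVC sketch: a tiny shopping-cart example, echoing the Wikipedia
# definitions above (domain logic in the Model, rendering in the View,
# event handling in the Controller).

class Model:
    def __init__(self):
        self.items = []                      # domain data only, no rendering

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):                         # domain logic adds meaning to raw data
        return sum(price for _, price in self.items)

class View:
    def render(self, model):                 # renders the model; holds no data
        lines = [f"{name}: {price}" for name, price in model.items]
        lines.append(f"Total: {model.total()}")
        return "\n".join(lines)

class Controller:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_add_clicked(self, name, price):   # a user action arrives here...
        self.model.add_item(name, price)     # ...only the model changes...
        return self.view.render(self.model)  # ...and the view re-renders it

ctrl = Controller(Model(), View())
print(ctrl.on_add_clicked("book", 12))
```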

All the UI technologies offer a way to handle those three aspects and, as a consequence, provide a programming model defining how the flow of information and events is handled through the MVC.

See below a simple schema I’ve made describing a GTK application: when you look at an application screen, it is made of graphical elements like buttons, lists, images and text labels, called widgets (or controls).

Remark: the term “widget” is used here with its literal meaning, “window gadget”. The term is now used a lot in Web 2.0 marketing terminology, and by Yahoo/Google/MS, to mean a “mini application” that can be put on a web page or run through an engine on a desktop PC or a mobile phone. To avoid confusion I prefer the term “control” over “widget” for the UI technologies, but I will keep using “widget” in the rest of the GTK example, as it is the term GTK itself uses.
Widgets are organized hierarchically in a tree, meaning that a widget can contain other widgets; for example, a list can contain images or text labels. In the example below the “root” widget is called a “Window”; it contains a kind of canvas, which itself contains a status bar, a title bar, a list and a softbutton bar. The list contains items, the title bar has a label, the softbutton bar contains some buttons, and so on.

A widget is responsible for:

  • Its own drawing, using a low-level rendering engine, called GDK in the GTK case (GDK offers APIs like draw_image, draw_text, etc.).
  • Computing its size according to its own nature (like the size of the text that will be displayed, for example) and the sizes of its children.
  • Reacting to some events and emitting specific ones: a button will emit a “press event” when it is pressed on the touchscreen or when its associated keypad key is pressed.

The widget tree propagates system events (keypad/touchscreen, etc.) and internal events (redraw, size change, etc.) through the widgets. The developer registers callbacks (functions, i.e. pieces of code implementing a functionality) that will be called when widgets fire events (like the “press event”).
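The tree, size negotiation and callback registration can be sketched in a few lines of Python. This is a much-simplified, hypothetical API in the spirit of the GTK example, not real GTK bindings.

```python
# Sketch of a widget tree: containers aggregate children, sizes are computed
# from the children, and user callbacks are connected to widget events.

class Widget:
    def __init__(self, name):
        self.name, self.children, self.callbacks = name, [], []

    def add(self, child):                    # build the tree
        self.children.append(child)
        return child

    def connect(self, callback):             # register a user callback
        self.callbacks.append(callback)

    def emit(self, event):                   # fire an event on this widget
        for cb in self.callbacks:
            cb(self, event)

    def size(self):
        # A container's size depends on the sizes of its children.
        return 1 + sum(child.size() for child in self.children)

window = Widget("Window")
softbuttons = window.add(Widget("SoftButtonBar"))
send = softbuttons.add(Widget("SendButton"))

send.connect(lambda w, ev: print(f"{w.name} received {ev}"))
send.emit("press-event")       # prints: SendButton received press-event
print(window.size())           # 3 widgets in the tree
```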

image

GTK Widget tree structure: a phone screen example

The major GTK/GLib formalism is how those events/callbacks are handled: through what is called a “gloop”, where all events are posted to the loop queue, dequeued one by one and “executed” in this loop, meaning their associated user callbacks are called. This loop runs in one thread. This is what we call a programming model. In nearly all UI technologies such a loop exists, with various formalisms for queue handling, event representation, etc.
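The loop idea itself is simple, and a Python sketch makes it concrete (hypothetical names; the real GLib main loop is far richer): events are queued, then dequeued and executed one by one in a single thread, invoking the registered user callbacks.

```python
# Sketch of the "gloop" programming model: a single-threaded event queue
# dispatching to user-registered callbacks.

from collections import deque

class EventLoop:
    def __init__(self):
        self.queue = deque()
        self.handlers = {}                  # event name -> list of callbacks

    def register(self, event, callback):    # like connecting a signal
        self.handlers.setdefault(event, []).append(callback)

    def post(self, event, data=None):       # input drivers, timers, etc. post here
        self.queue.append((event, data))

    def run_pending(self):
        while self.queue:                   # dequeue and "execute" each event
            event, data = self.queue.popleft()
            for cb in self.handlers.get(event, []):
                cb(data)

loop = EventLoop()
loop.register("key-press", lambda key: print(f"key pressed: {key}"))
loop.post("key-press", "5")
loop.run_pending()                          # prints: key pressed: 5
```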

To finish with the above schema: the user callback will then access the middleware services, the various databases, and so on.

There is no clear MVC formalism in that case: the controller is mixed with the view… and even the model is mixed with the widgets (so with the view)!

Qt’s model is essentially identical to this one.

One last point, very relevant for application development and design: the notion of states. Each application is in fact a state machine displaying screens linked by transitions, like in the example below where, in state 1, the user decides to write an SMS; this opens an SMS editor screen, and clicking send leads to a selection of phone numbers.

image

Application State Machine: write sms example
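The write-SMS example can be captured by a small transition table, sketched here in Python (screen and action names invented for the example):

```python
# Sketch of an application state machine: screens are states, user actions
# are transitions, and unknown actions leave the screen unchanged.

TRANSITIONS = {
    ("idle", "write_sms"): "sms_editor",
    ("sms_editor", "send"): "contact_selection",
    ("contact_selection", "select"): "sending",
}

def next_screen(screen, action):
    return TRANSITIONS.get((screen, action), screen)  # unknown action: stay put

screen = "idle"
for action in ("write_sms", "send"):        # the user writes an SMS, then sends it
    screen = next_screen(screen, action)
print(screen)  # contact_selection
```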

Here is an attempt to formalize a modern UI framework with Data binding (for Model abstraction).

image

UI engines formalization
Control: equivalent to a widget, but where the MVC model is fully split. A Data Model has to be associated with it, alongside a Renderer, to make it usable.
Control Tree: equivalent to the widget tree: an aggregation of Controls, the association of the controls with a Renderer and a Data Model, and possibly the specification of Event Handlers.
Data Model: Object defining (and containing when instantiated) a set of strongly defined and typed data that can be associated with a Control instance.
Data Binding: Service used to populate a Data Model.
Control Renderer: Object that is able to graphically represent a Control associated with a Data Model, using services from a Rendering Library.
Rendering Library: Set of graphical primitives, animations, etc.
Event Handling (and Event Handler): code (any language) reacting to events and modifying the current state machine, the Control Tree, etc.
Standardized Services: Interfaces defined to access middleware directly from the event handling code.
Server Abstraction: Possibility to transparently use Data Binding or any service call locally or remotely.
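A tiny Python sketch of this formalization (all names hypothetical) shows the split: a Control is only usable once a Data Model and a Renderer are associated with it, and Data Binding populates the model from some backend.

```python
# Sketch of the Control / Data Model / Renderer / Data Binding split
# described above.

class DataModel:
    def __init__(self, fields):
        self.fields = dict(fields)           # strongly defined, typed data

class Control:
    def __init__(self, name):
        self.name, self.model, self.renderer = name, None, None

    def bind(self, model, renderer):         # a Control alone is not usable
        self.model, self.renderer = model, renderer

    def render(self):
        return self.renderer(self.name, self.model)

def label_renderer(name, model):             # uses the "rendering library"
    return f"[{name}: {model.fields['text']}]"

def data_binding(control, backend):          # populate the model from a service
    control.model.fields.update(backend())

title = Control("TitleBar")
title.bind(DataModel({"text": ""}), label_renderer)
data_binding(title, lambda: {"text": "Inbox (3)"})
print(title.render())  # [TitleBar: Inbox (3)]
```

Swapping the renderer changes how the control looks without touching the data; swapping the backend changes the data without touching the rendering.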

OK, if you are still there and your brain is still functional, here is what’s happening today in this area…

In traditional UI frameworks like GTK, Qt, Win32, etc., the control tree is described in C/C++… but a little niche technology has paved another way: it is called HTML. After all, an HTML web page defines a tree of controls; the W3C uses a pedantic term for it, the DOM tree. JavaScript callbacks are then attached to those widgets to allow user interaction. This is why all the new UI technologies are based on an XML description for this tree: it is much easier to use, allows a quicker description of the controls, and above all it lets nice design tools manipulate the UI. Apart from this XML representation, the majority of the UI technologies come with:

  • An animation model, allowing smooth transitions, popularized by the iPhone UI, but already present in MXML (Adobe Flex format), XAML (MS format), SVG, TAT’s offering…
  • Modern rendering engines (Flash for Flex, MS has one, TAT’s Kastor).
  • Nice UI tools for quick implementation: Adobe Flex Builder, the MS Expression line, TAT Cascades, Digital Airways Kide, Ikivo SVG…
  • In many cases: a runtime, to be able to run a scripting language.
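A tiny Python sketch shows why the declarative XML description is attractive: the layout below is data, and parsing it yields the control tree directly (element names are invented for the example, not from any real framework).

```python
# Sketch: an XML UI description parsed into a control tree, in the spirit of
# MXML/XAML/HTML. The tree structure falls out of the markup for free.

import xml.etree.ElementTree as ET

layout = """
<window>
  <titlebar label="Inbox"/>
  <list>
    <item label="Message 1"/>
    <item label="Message 2"/>
  </list>
</window>
"""

def describe(node, depth=0):
    """Walk the parsed control tree and indent each control by its depth."""
    label = node.get("label")
    line = "  " * depth + node.tag + (f" ({label})" if label else "")
    return "\n".join([line] + [describe(child, depth + 1) for child in node])

print(describe(ET.fromstring(layout)))
```

This is exactly what makes design tools possible: they can read, show and rewrite the XML without ever touching the application code.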

Here are some quick tables, by no means complete, of some of the most relevant UI technologies in the PC and mobile phone space.

Just to explain the columns:

  • RIA : Rich Internet Application, delivered through a browser plugin
  • RDA : Rich Desktop Application: delivered through a desktop runtime
  • Runtime: an overused name, used here just to represent the piece of technology that allows the UI to run
  • UI: Technology to describe the control tree (you know what it means now!)
  • Event Handling: the dynamic UI part, and how to code it (which languages)
  • Tools: UI tools

 

image

RIA&RDA Chart

 

image

Embedded rich UI technologies

So it’s time to answer the main point of this post: how are those technologies helping unleash designers’ creativity? By defining a new development flow, allowing each actor to play a different role.

Here is the Adobe Flex “standard” development flow:

image

Adobe Flex&Air tools flow

In the next schema I try to depict a more complete Adobe Flex flow, adapted to the mobile world, where, for me, a central piece is missing today: it is not currently possible to extend the Adobe AIR engine “natively”, which is mandatory for mobile platforms with very specific hardware, middleware and form factors.

So I take the Adobe flow more as an example to demonstrate how it should work than as the paradigm of the best UI flow, because this is not the case today (the same remark applies to the MS flow; it is less true for TAT, for example).

 

 

image

An Adobe UI design flow for embedded

This shows clearly that the “creativity” phases are decoupled: the designers use different tools and can iterate together a lot, without any need for the software engineer, who can focus on implementing the services needed by the UI, optimizing the platform, and adding middleware features.

  1. The Interaction Designer defines the application’s high-level views and rough flow
  2. The Graphical Designer draws those first screens
  3. The Interaction Designer imports them through Thermo
  4. The Graphical Designer designs all the application’s graphical assets
  5. The Interaction Designer rationalizes and formalizes what kinds of data, events and high-level services the application needs
  6. The Interaction Designer & Software Engineer work together on the above aspects (HINT: A FORMALISM IS MISSING HERE); once done:
  7. The Software Engineer prepares all the event and data services and unit-tests them; in brief: prepares the native platform
  8. The Interaction Designer continues working on the application flows and events, trying new things, experimenting with the Graphical Designer based on the formalism agreed with the Software Engineer.
  9. Once done… the application is delivered to the Software Engineer, who will perform target integration and optimization… and perhaps (hum, certainly) some round trips with the other actors 🙂

So this is it! These flows really focus on giving power to the designers… taking it from the engineer’s hands. Some technologies are still missing to really offer a full mobile phone solution:

  • All the PC technologies are about building ONE application, not a whole system with strong interactions between applications. With today’s technologies the designers are missing this part… leaving it to the engineer: how do you cleanly do animations between applications?
  • Strong theming and customization, needed for:
    • Product variant management: operator variants, product variants (with different screen sizes and button layouts, for example), language variants (many phones have to support 60 languages, but in separate language packs).
    • A not-so-well-known one: fast flashing of those variants on the factory line. It takes a long time to flash the whole software of a mobile phone on the factory line… so if you can have a big common part and a “customization” part that is as small as possible yet contains the full UI, you gain productivity… and big money 🙂
  • Adapted presets of widgets or controls (try to build a phone with WPF or Flex… all the mandatory widgets are missing).

Anyway, a UI technology is only a way to interact with a user… to offer them a service. Most of the technologies presented above are about service delivery, not only UI. My next post will be about this notion of service delivery platforms.

[Update] Replaced the term “ergonomics specialist” by “Interaction Designer”, thanks Barbara, see comments below.

Thomas Menguy