Performance Tuning HTML5 In a Mobile Environment

BC Holmes, Raul Vieira, Craig Wedseltoft
 

The Project

The Good

The client managed to get their new HTML5-based mobile application into production. The project provided evidence that using HTML5 with their existing web application infrastructure was a successful approach to providing mobile functionality.

The Bad

The project took much longer than anyone expected and was quite a bit more expensive. Their off-shore development partner didn't have much experience building mobile applications in HTML5, and the learning curve kept tripping them up the entire way.

The Ugly

The bigger and uglier problem was that the users of the application were finding it painfully slow. When our phone rang, this client wanted our help to solve this ugly problem: they wanted us to take someone else’s code and make it faster.

The Story

We already had a relationship with the client. Intelliware had already provided some assistance with finding this project’s low-hanging fruit of performance: getting their caching working, trimming their resources, and helping them introduce the minification of CSS and JavaScript to the build process. We had even moved some big framework libraries so that they would be served up by the device itself, rather than downloading from their web servers. The exercise before us was beyond low-hanging fruit. We needed to make some key structural changes.

Not only were we entertaining different types of changes, we were also trying to accomplish different ends. Our prior work focused on performance enhancement and faster page loads by making the resources faster, downloading them less frequently, and optimizing HTTP/HTTPS. What lay ahead were performance problems related to sluggish response times and other delays between a user action and the application’s response. These findings reinforced the need to make broad changes to the structure of the code.

We knew the project well enough to understand some of the root problems of the code:

  1. It was written by a team that wasn’t terribly experienced writing mobile HTML5 applications. In particular, they approached the design under the assumption that it could be built just like a standard web application with the addition of a few frameworks.
  2. It was written at a time when the project was already late and the development team was exhausted, but still under pressure to deliver.

Under the circumstances, a rewrite of the original code would have been the most prudent approach, but it would have been a hard sell. The cost overruns of the original project had the effect of making the client extremely wary of starting up any major development.

We sought to be strategic about our recommendation. The mobile application included eleven key functions, but the primary feature was the ability to create specific transactions. It was also the function that experienced the worst performance problems. We understood how that came to be: this function was one of the last features delivered (all the read and query features were delivered before the crucial create/update feature), and at a time when the morale of the offshore development team was at its worst. So we crafted a recommendation based around rewriting this one crucial feature.

It was tempting to look at some of the tools used in the application. The app was built on top of jQuery Mobile with a handful of other frameworks for functions such as scrolling. We had seen better results with tools like Sencha, but we didn't feel it would be wise to change key frameworks for only one part of the application.

Thus, we proposed a constrained rewrite of the one primary feature, maintaining the original frameworks. There can be no doubt that the best performance usually comes from a native implementation, but the client had good reasons to stick with a predominantly HTML5-based implementation. We weren’t proposing a native rewrite and we weren’t proposing a change in HTML5 frameworks. We were using the same technology stack, designing the “x” functionality around some key guidelines that we believed would give us improved performance. This paper describes the guidelines used.

Architecture

The architecture of the mobile app under consideration was a bit of a hybrid. Many of the early features were implemented as pure-native functions before the organization began building out new features in HTML5. The net result was an app that rendered many of its functions in a UIWebView with content served from a set of web servers. At the same time, it also had a non-trivial native wrapper that handled everything from authentication to error handling, to some of the important transitions.

The server side of the application was built using the .Net platform, with ASP.Net as its presentation technology. In many ways, this tier was built like a traditional web application, but with the addition of jQuery and jQuery Mobile. The original developers expected these frameworks to provide all the mobile-readiness magic.

The business domain was such that it wasn't meaningful to have any real functionality available when the user had no internet connection. Nonetheless, we had previously ported certain JavaScript libraries onto the device as a strategy to make the page load times faster. For most of the pages, JavaScript and other assets lived on the server where the development team could maintain them most effectively.

Performance Improvement

When we talk about performance, we often mean different things. Our prior work with this client improved individual page load times. We were also dealing with processing that depended on third-party services we could not control, so we were unable to change their response times.

For this engagement, many of the complaints we were helping to address related to overall sluggishness. Users would tap a button and it would seem to take a long time for the application to show any signs of activity. Nonetheless, we set our sights on improving the overall snappiness of the app by increasing the responsiveness of the controls. To do this, we established a handful of guidelines that informed our rewrite.

Project guidelines:

  • Minimize full-page reloads
    • The only time the application should load the entire page (and its JavaScript/CSS/image assets) is when the function starts (during the initial page load)
  • Keep the number of event handlers small:
    • Use event delegation where necessary
    • Register event handlers once on an element
  • Ensure that page set-up and event binding happens only once
  • Use the fast-button pattern where applicable
  • Pay special attention to scrolling
  • Be mindful of the type of operations that cause repaints and reflows – keep these to a minimum

The following sections will describe how we applied these guidelines to our project.

Full-page reload

One of the first areas of improvement we considered was a simple reduction in the number of full-page reloads in the application.

Since the dawn of Internet form tags, full-page reloads have been a common aspect of web applications. The user fills out a form, clicks a submit button and the back-end server sends the user a new page: either the “next” page in a sequence or a new copy of the form page with some error information for the user to review. The browser then replaces the current page with the newly received page.

Once Ajax-style web application programming became popular, this approach fell out of favour with modern web applications (Facebook and WordPress.org, for example). More often than not, modern web applications receive XML or JSON data from the server and then use JavaScript to update the HTML already loaded in the browser. However, this is an aspect of web application development where enterprises are late adopters. Most enterprise web applications are completely content to continue with the traditional post-and-receive-new-page pattern.

We would argue that when enterprises are building HTML5 mobile applications, they need to embrace the Ajax-based paradigm. Not only is it easier to manage certain opportunities for error, but it also helps to manage the size of data transferred (and, consequently, performance).

Now, to be fair, jQuery Mobile was contributing a bit of its own magic to the equation.

Although the app was written in the traditional post-then-new-page way, jQM was introducing some Ajax and background loading into the mix. But it wasn't doing so in a way that improved performance; it was doing it in a way that made background loading easy for developers, and it was completely prepared to sacrifice performance for ease of implementation.

In the application we were asked to look at, a very common path to creating a transaction could involve five page loads, including the initial page load. There was a time when these pages were in the neighbourhood of 100kb in size. They were burdened with inlined scripts and redundant artifacts. Thus, that one transaction required half a MB of data (five loads of 100kb) downloaded onto the user’s phone, even before resources (JavaScript, CSS, images) were considered. The last page of the transaction flow even included a button to start another transaction, which would reload the first page again from scratch and start the sequence all over again.

As we redesigned these pages, we chose to impose a guideline on ourselves: the only full-page load we’d accept was the initial page load. A segmented control at the top of the page allowed the user to toggle between different variations on the transactions.

Previously, the app invoked a full-page load to switch between the variations. Our revised application used JavaScript to handle the switch. Originally, a “Get Quote” function required a new page load to get a quote. In our revised version, we Ajax-fetched the data as JSON, and inserted the quote details using JavaScript. We used the same guideline on any submit/confirm-your-details screens: Ajax-post the data, and revise the screens using JavaScript.
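As a hedged sketch of this pattern (the element ids, field names, and the /quotes endpoint are illustrative, not taken from the actual application), the quote details can be built by a small pure function and spliced into the already-loaded page:

```javascript
// Build the quote markup from JSON data. Keeping this a pure function
// (JSON in, markup out) makes it easy to unit test in isolation.
function renderQuote(quote) {
  return '<dl class="quote">' +
         '<dt>Amount</dt><dd>' + quote.amount + '</dd>' +
         '<dt>Expires</dt><dd>' + quote.expiry + '</dd>' +
         '</dl>';
}

// In the page itself, Ajax-fetch the quote as JSON and update the DOM
// with JavaScript -- no full-page load:
//
//   $.getJSON('/quotes', { product: productId }, function(quote) {
//     $('#quote-panel').html(renderQuote(quote));
//   });
```

The same shape applies to the submit/confirm screens: post the form data as JSON, then revise the visible screen from the response.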

This wasn't rocket science in terms of approach, but it did require a major rewrite of the existing code because that code followed a pattern that was out-of-date for HTML5. In essence, we were following a "Single Page Application" (SPA) architecture with this module. The initial page load included all assets (JavaScript, CSS, etc) to perform all functions of the module. Consistent with the Single Page Application architecture, we were ensuring that subsequent interactions with the server were only exchanging data (JSON), not HTML.

We argue that, with only a few exceptions, brand new HTML5 mobile applications should be architected as Single Page Applications: there are obvious performance benefits to being able to minimize the amount of bulky HTML that travels over the wire. In the case of our client’s app, it was impossible for us to retool the whole application as an SPA. Nonetheless, applying SPA architectural approaches to the target module considerably improved the performance.

Event delegation

Our application's scripts serve their purpose by first listening for, and then responding to, events emitted by the Document Object Model (DOM). However, these bindings come at a memory cost. The challenge for our team was to have our scripts react to the same number of events while reducing our memory footprint. Our solution was to leverage how events are managed in the DOM and employ a technique called event delegation. To understand event delegation, we first need to understand how the browser issues events.

Whether it be a tap or a swipe, all events in a standards-compliant browser go through three phases:

1. During the capturing phase, our event travels from the top of the DOM through the tree of nodes leading to the element where the event originated.

2. In the target phase the event has reached the element where the event was triggered.

3. The final phase of an event is the bubbling phase. Beginning at the target element, this phase refers to the event moving through all of the elements in the target’s parent hierarchy. Anywhere along this path, bound handlers can stop the propagation of the event from proceeding any further up the tree.

This last phase allows us to implement event delegation. Here’s an example, first starting with HTML structure:

<navigation id="main-nav">
  <a href="view-1.html" data-role="none">First View</a>
  <a href="view-2.html" data-role="none">Second View</a>
</navigation>

Instead of using an event handler for each link, our JavaScript binds a single handler on the navigation element rather than on the individual links:

$('#contact-us-page').live('pageinit', function() {
  $('navigation').click(function(e) {
    var target = e.target;
    if (target.nodeName === 'A') {
      if (target.getAttribute('href') === 'view-1.html') {
        // do something with this
      } else {
        // do something else
      }
      e.stopPropagation();
    }
  });
});

To simplify this example, a data-role="none" attribute was used to tell jQuery Mobile to forgo markup enhancement. If our code were to allow markup enhancement on these elements, the handler would change slightly to account for the possibility of a parent or child of the anchor being clicked:

$('#contact-us-page').live('pageinit', function() {
  $('navigation').click(function(e) {
    var $target = $(e.target).closest('a');
    if ($target.attr('href') === 'view-1.html') {
      // do something with this
    } else {
      // do something else
    }
    e.stopPropagation();
  });
});

Event binding comes at a cost. Each additional event binding is added to a list that must be traversed every time the user interacts with the application, adding to the overall response time of the app and making it less performant. While the primary benefit of event delegation is reduced memory consumption, its use also contributes to better performance at the points where event binding typically occurs – namely during page render and when adding new elements to the document.

At page render time, less binding code runs, and new elements added to the document need no bindings of their own because their events are handled at a higher level. In a desktop environment, event delegation is a best practice; in a mobile context, its use contributes to a smooth and pleasing user experience.

Register once and only once

Leading up to our work on the rewrite, the other vendor on the project made some additions that caused the application to run slower than usual. Even more concerning, after extended use of key functions, performance would degrade to unusable levels on a mobile device.

After our review of the code, we found several pages in the application registering event handlers on the same DOM elements repeatedly. Another guideline emerged for the rewrite: register event handlers once on an element!

This probably seems like a given, and when working on a desktop website you've most likely never had to deal with this scenario. That's because of the inherent nature of a website: the browser loads a page and creates a new JavaScript context; you bind your handlers to events emitted by elements; you handle those events; the user requests a new page and a new JavaScript context is established. All previous event bindings are destroyed.

A single page web app, or SPA, is different. With this architecture, one JavaScript context is created and maintained until the browser is terminated. In other words, because no new documents are loaded and only partial updates occur, all event bindings remain unless they are replaced or explicitly removed.

jQuery Mobile makes it trivial to create a SPA with slide transitions akin to native experiences. By hijacking link clicks and preventing the browser's default behavior of replacing the current document with a new page, jQM instead makes an Ajax request for the target page and extracts only part of the response (i.e., the part of the response demarcated with a data-role="page" attribute). The fragment is then appended to the DOM and animated into view. So what does this mean for script binding?

Firstly, none of the scripts referenced in the head of a dynamically loaded page are loaded.
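As a sketch (the file name and id here are illustrative), consider a page that jQM fetches via Ajax:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- NOT loaded when this page is fetched via jQM's Ajax navigation -->
  <script src="confirm-page.js"></script>
</head>
<body>
  <div data-role="page" id="confirm-page">
    <!-- only this fragment is extracted, appended to the DOM,
         and animated into view -->
  </div>
</body>
</html>
```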

Consequently, all bindings need to be defined in the containing page’s script references. The existing code base of the application had event bindings in script blocks contained in the appended document fragments. This posed a significant issue. As the user navigated the application and revisited a page a new handler would be registered for the same event. This was less of a concern for us with our JSON-based design.

Secondly, and most importantly, jQM pages have a lifecycle. For example, registered handlers can be notified before a page has been requested, when the response has been appended to the document, or when a page is actually presented to the user. This last case was the most cited culprit found in the existing code base.

Concretely, the user would arrive at a start page in the page flow and, in the page's pageshow event, button tap handlers were registered. The user would then progress to the next page in the flow – a confirmation page, for example – and decide to return to the previous page. Herein lies the problem: because the event handler was registered every time the page was shown, we now had a duplicate binding. As we've already stated, this affected performance, but more importantly it resulted in incredibly strange behaviour – duplicate form submissions and transitions, to name a few.

Our solution was straightforward: no bindings in page events that occur multiple times on a given view. This meant not adding bindings in any of the following events:

  • pagebeforechange
  • pagechange
  • pagebeforeshow
  • pageshow

More usefully, we would bind event handlers in jQM's pagecreate event. This event fires once, when the page has been added to the DOM – whether loaded via Ajax or present in a multi-page template. In cases where we dynamically added elements to an existing page, event bindings for those elements were registered on demand.
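The duplicate-binding problem this guideline avoids can be demonstrated even outside jQM. The following is a minimal, framework-free illustration (a tiny handler list stands in for DOM event binding; this is not the application's code):

```javascript
// A minimal stand-in for an element that accepts event handlers.
function FakeButton() { this.handlers = []; }
FakeButton.prototype.on = function(fn) { this.handlers.push(fn); };
FakeButton.prototype.tap = function() {
  this.handlers.forEach(function(fn) { fn(); });
};

var button = new FakeButton();
var submissions = 0;

// Simulates binding inside a repeating event such as pageshow.
function onPageShow() {
  button.on(function() { submissions += 1; });
}

onPageShow();  // first visit to the page
onPageShow();  // user navigates away and comes back
button.tap();  // a single tap...

console.log(submissions);  // ...fires the handler twice: 2
```

Binding in a once-only event (the role pagecreate plays in jQM) means onPageShow's body runs a single time, and one tap produces exactly one submission.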

As we’ve already stated, this guideline not only improved performance, it also led to the delivery of a properly functioning application.

Fast button problem

A common complaint about the existing HTML5 application was the responsiveness of the buttons. It seemed that no matter what button the user pressed, there was a noticeable delay until the application responded with some kind of visual feedback. This was a poor user experience we sought to rectify as part of our performance-related work.

Even though HTML5 allows us to emulate a native experience on mobile platforms in a way that is truly cross-platform, by default the responsiveness of buttons is nowhere near native-like. The reason is that, by design, there is a 300ms delay between the user tapping an element and the click event actually firing. The delay is necessary in order to distinguish between a single and a double tap. Mobile Safari doesn't emit dblclick events on elements, so the only reason to keep the delay would be to preserve the double-tap-to-zoom feature.

In our application, like the vast majority of other HTML5 apps emulating native apps, there was no need to zoom with a double tap, so we decided to use a little JavaScript trickery to remove the 300ms delay and make the responsiveness of the buttons more native-like. We accomplished this by leveraging the fact that touchend events fire immediately, with no delay. However, we couldn't just bind our click event handlers to touchend events, because that would have resulted in some unexpected behaviour and could have broken existing code.

Some considerations had to be made:

  1. We had to make sure that click events weren’t fired if a touchend event was preceded by some significant touchmove events. The user had to be able to scroll the page or perform gestures without triggering a click if their finger happened to start on the button or land on the button.
  2. We wanted to preserve the default feedback that is applied to buttons on click events (i.e., highlighting).
  3. We had to preserve backwards compatibility with legacy code already bound to click events.

Google's solution

Luckily, we were not the first company to encounter the button-responsiveness problem in HTML5 apps. Google provided a solution that took issues (1) and (2) into consideration, so we didn’t have to completely reinvent the wheel.

To understand the solution to problem (1) it is important to first understand taps and scroll gestures in terms of touch events. Ignoring the mouse events mousedown, mouseup, and mousemove, we have:

  • Tap gesture: touchstart, touchend, click
  • Scroll gesture: touchstart, touchmove, …, touchmove, touchend

To get our handlers to execute on the touchend event of a tap gesture, but not on a scroll gesture, we need to monitor the touchmove events. If the distance between the (x,y) coordinate of the touchstart event and the subsequent touchmove events exceeds a certain threshold, then we don't handle the following touchend event as a click. And of course, we only want to handle a touchend as a click if the touchstart and touchend events are targeted at the same element.
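The distance test can be sketched as a small helper (hypothetical, for illustration only; the FastButton code that follows inlines the same check with a 10px threshold):

```javascript
// Was this touch sequence a tap or a scroll? Compare how far the finger
// travelled on each axis against a pixel threshold.
function isTap(start, end, threshold) {
  return Math.abs(end.x - start.x) <= threshold &&
         Math.abs(end.y - start.y) <= threshold;
}

console.log(isTap({x: 10, y: 10}, {x: 14, y: 12}, 10));  // true: a tap
console.log(isTap({x: 10, y: 10}, {x: 10, y: 90}, 10));  // false: a scroll
```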

google.ui.FastButton = function(element, handler) {
  this.element = element;
  this.handler = handler;
  element.addEventListener('touchstart', this, false);
  element.addEventListener('click', this, false);
};

google.ui.FastButton.prototype.handleEvent = function(event) {
  switch (event.type) {
    case 'touchstart': this.onTouchStart(event); break;
    case 'touchmove': this.onTouchMove(event); break;
    case 'touchend': this.onClick(event); break;
    case 'click': this.onClick(event); break;
  }
};

google.ui.FastButton.prototype.onTouchStart = function(event) {
  event.stopPropagation();
  this.element.addEventListener('touchend', this, false);
  document.body.addEventListener('touchmove', this, false);
  this.startX = event.touches[0].clientX;
  this.startY = event.touches[0].clientY;
};

google.ui.FastButton.prototype.onTouchMove = function(event) {
  // A move of more than 10px in either direction means the gesture is a
  // scroll, not a tap.
  if (Math.abs(event.touches[0].clientX - this.startX) > 10 ||
      Math.abs(event.touches[0].clientY - this.startY) > 10) {
    this.reset();
  }
};

google.ui.FastButton.prototype.onClick = function(event) {
  event.stopPropagation();
  this.reset();
  this.handler.call(this.element, event);
  if (event.type == 'touchend') {
    google.clickbuster.preventGhostClick(this.startX, this.startY);
  }
};

google.ui.FastButton.prototype.reset = function() {
  this.element.removeEventListener('touchend', this, false);
  document.body.removeEventListener('touchmove', this, false);
};

The solution to problem (2), preserving the browser’s default feedback, is also taken care of in Google’s solution. They found that simply attaching the handler to the click event was sufficient to preserve the browser’s default highlighting of the button while in the pressed state.

The downside to adding a click handler is that the handler ends up getting executed twice. The fast button implementation executes the handler after the touchend event, and then the handler gets executed again about 300ms later on the click event.

The solution was to introduce a click buster. The click buster works by listening for click events on the document in the capturing phase, and kills the events already handled on a touchend event.

google.clickbuster.preventGhostClick = function(x, y) {
  google.clickbuster.coordinates.push(x, y);
  window.setTimeout(google.clickbuster.pop, 2500);
};

google.clickbuster.pop = function() {
  google.clickbuster.coordinates.splice(0, 2);
};

google.clickbuster.onClick = function(event) {
  for (var i = 0; i < google.clickbuster.coordinates.length; i += 2) {
    var x = google.clickbuster.coordinates[i];
    var y = google.clickbuster.coordinates[i + 1];
    if (Math.abs(event.clientX - x) < 25 && Math.abs(event.clientY - y) < 25) {
      event.stopPropagation();
      event.preventDefault();
    }
  }
};

document.addEventListener('click', google.clickbuster.onClick, true);
google.clickbuster.coordinates = [];

A jQuery plugin for backwards compatibility with legacy code

The only problem we had with the Google Fast Button implementation was that it employs a proprietary API for adding event handlers. We had a lot of legacy code that already had button handlers bound to click events. We didn’t want to risk breaking any existing code by changing the event bindings, so we came up with a simple solution to provide backwards compatibility. We created a small jQuery plugin that will take any element with a click handler and turn it into a Fast Button:

(function($) {
  $.fn.createFastButton = function() {
    return this.each(function(index, element) {
      var $element = $(element);
      if (!$element.hasClass('fastbutton')) {
        $element.addClass('fastbutton');
        var fastbutton = new google.ui.FastButton(element, function(e) {
          $element.trigger('click');
        });
      }
    });
  };
})(jQuery);

Turning an existing element with a click handler into a Fast Button is as simple as $('a').createFastButton();

Minimizing repaints and reflows

At the core of our reimplementation strategy was JSON as the data interchange format and the composition of the UI using JavaScript. Sounds easy enough: Ajax requests sending or receiving JSON, and methods creating or updating DOM elements.

Most programmers coming from the desktop web will unknowingly introduce the clunkiness typical of most HTML5-based mobile apps by not being aware of browser repaints and reflows. These ill effects mostly go unnoticed on the desktop, but in the resource-constrained environment of a mobile phone they manifest themselves as terrible flickers and unresponsive displays. Though repaints and reflows are unavoidable, we set out to minimize their impact on the user's experience.

Put simply, a repaint occurs when the skin of an element changes. For example:

  • changes in text colour
  • changes in background colour
  • changes in border colour
  • changes to the visibility style property

Reflows occur when the geometry of an element is affected. For example:

  • changes in padding
  • changes in height or width
  • changes to the display style property

An element reflow will actually force parent elements and any elements following the reflowed element to also be reflowed. Consequently, a seemingly innocuous DOM update can reflow the entire page leading to less than desirable results. We navigated these issues by adhering to the following principles:

  1. Apply animations to absolutely positioned elements
  2. Apply CSS classes to elements lower in the tree instead of higher level elements
  3. Keep the HTML as simple as possible
  4. Avoid asking for an element’s dimensions repeatedly
  5. Create new nodes off-line and append them to the document once
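Principle 1 deserves a brief illustration. The class name here is hypothetical; the point is that an absolutely positioned element sits outside the normal document flow, so animating its position repaints the element without forcing its siblings to reflow:

```css
/* Hypothetical selector for a panel that slides in from the right.
   position: absolute takes the panel out of the normal flow, so
   animating its left property moves the panel without reflowing the
   rest of the page. */
.slide-in-panel {
  position: absolute;
  top: 0;
  left: 100%;
  -webkit-transition: left 250ms ease-out;
}

.slide-in-panel.visible {
  left: 0;
}
```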

Points 4 and 5 are worth further explanation.

Avoid asking for an element’s dimensions repeatedly

During the course of our application's lifetime we may need to calculate the dimensions of an element, perhaps to position another element appropriately within it. As an optimization during the render process, the browser caches element geometries. However, in order to provide our scripts with the latest and most accurate information, the browser must first reflow the element. This deserves a couple of examples.

First, let’s have a look at an undesirable approach. Here we loop 50 times and query an element’s width during each iteration:

var containerWidth;
for (var i = 0; i < 50; i++) {
  containerWidth = $('#an-element-container').width();
  // use the width
}

In this example the browser will reflow the document on each iteration of the loop.

A more useful approach is to ask the browser for the element width once:

var containerWidth = $('#an-element-container').width();
for (var i = 0; i < 50; i++) {
  // use the width
}

Create new nodes off-line and append them to the document once

Appending new elements to, or replacing elements in, the DOM was fundamental to our application's design. Our challenge was to update the document without sacrificing the user experience. This is where the documentFragment came into play. Let's first see how a naive approach to appending elements to the document may cause numerous reflows:

var $ul = $('#my-unordered-list');
for (var i = 0; i < 50; i++) {
  $ul.append('<li>' + i + '</li>');
}

The point most often overlooked is that $ul is a live DOM element and changes made to it are visible immediately. In this case the document will reflow 50 times. You would most likely see this code when doing desktop web development, and in most cases you wouldn't consider it a problem. On a phone, however, things will be different. The next example uses a documentFragment to minimize the impact of the reflow:

var fragment = document.createDocumentFragment(),
    $ul = $('#my-unordered-list');
for (var i = 0; i < 50; i++) {
  fragment.appendChild($('<li>' + i + '</li>')[0]);
}
$ul.empty().append(fragment);

The documentFragment does not live in the document and is just a container for elements. In this example we append 50 list item elements to the fragment and then append its contents to the unordered list once, significantly reducing the amount of document reflow.

Using these techniques contributed to a much improved user experience.

Evidence

We implemented these changes on an aggressive schedule and put the revised code into production. Our timeframe was measured in a few short weeks – a drop in the bucket compared to the huge timescale of the original development project.

A reasonable question to ask is: did our changes actually improve the performance of the application? Our final deliverable for this engagement was a presentation coupled with some actual performance measures. We were excited to be able to report on some real, quantifiable improvement. But more importantly, by the time we presented these findings, the business stakeholders had had an opportunity to use the retooled app and were able to report that the app felt noticeably faster than before. Often, that recognition is more meaningful than the raw data; perception often means far more than statistics.

One of the most straightforward measures was the simple responsiveness of button pushes.

As we said earlier, users perceived the application as sluggish because it didn’t respond snappily to user actions. We were able to instrument our JavaScript to quantify the response time of standard buttons versus Fast Buttons.

The following chart compares the old and new implementations.

(Chart: Button tap response time)

At the level of raw numbers, the findings aren't dramatic. Our new implementation shaves approximately 375ms off the button response time – less than half a second. Stated another way, that improvement dropped the response time to one-sixth of its original measure (from roughly 450ms down to 75ms). Most importantly, this improvement was key to the user's perception of responsiveness.

Another truth we knew about the performance of the application was that its performance degraded over time. The combination of needing to perform full-page reloads and re-registering events over and over again had the effect of making the application start off slow and get progressively worse.

The previously mentioned toggle buttons are a good example. The users can employ a segmented control to toggle between different types of transaction creation screens. In the original design, each time the user pushed a toggle button, a new full page would be requested from the server (but loaded in the jQM Ajax load thread) and be initialized in the web view (registering a series of jQM event handlers – sometimes causing the same events to be registered multiple times). This design was painfully slow.

We stop-watched the same process using the original code and our revised code. In the revised code, we allowed only one full-page reload and we were careful about when we allowed events to be registered.

(Chart: Toggle response time)

As we can see, the original code started with poor performance (approximately 2 to 5 second response times) and degraded to embarrassing performance (almost half a minute). The revised code was not only considerably faster, but it also maintained level performance as the user interacted with the application.

Obviously, much of the poor initial response time relates to the cost of loading new HTML from a server, but the degradation was more a function of too many event handlers. By contrast, we also stop-watched a function that had no server round-trip component. One common type of user interaction involves tapping a control that slides in a selection screen; the user chooses an option from the selection screen, and several different types of selections are used to create the transaction.

Once again, we observed the same performance degradation using these selection screens. As the user continued to interact with the original application, more and more event handlers would be registered, and the performance would degrade significantly.
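The shape of that degradation follows directly from the accumulation: if each interaction registers one more handler, and every registered handler fires on each tap, the per-tap cost grows linearly with the number of past interactions. A small simulation of that pattern (illustrative names, a plain array for the registry):

```javascript
// Simulating the degradation pattern: each interaction re-registers a handler,
// and every registered handler fires on the next tap, so the work done per tap
// grows linearly with the number of past interactions.
const handlers = [];
function showPicker() {
  handlers.push(() => {});        // another duplicate handler every time
  handlers.forEach(fn => fn());   // all registered handlers fire on this tap
  return handlers.length;         // stand-in for the per-tap cost
}

const costs = [];
for (let i = 0; i < 4; i++) costs.push(showPicker());
console.log(costs); // [1, 2, 3, 4] — cost climbs with every interaction
```

This linear growth is consistent with the "worse and worse" curve we measured for the selection screens.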

Below is a graph of those timings:

Picker Launch Response Time

Here, the initial measures are informed by the time it takes to “slide in” the new selection screen. In the original code, this time got worse and worse as the user interacted with the app. In the revised code, the application maintained a consistent performance over time.

Conclusions

There are many important advantages to using HTML5 in mobile applications: cross platform use, reuse of existing skill and/or tool investments, maintainability considerations, etc. But at the end of the day, every user is going to judge an HTML5 application by a set of expectations that have been established by comparable native apps. That’s not an insurmountable challenge, but here’s the rub: it’s too easy to get performance wrong.

If you choose HTML5 for your mobile app because you have a bunch of enterprise web developers on staff and you figure that a framework like jQuery Mobile will let them plug their web-shaped pegs into a mobile-shaped hole, then you need to expect that the performance of the first deliverable is going to be bad. If you don’t like that outcome, then you need to plan differently.

First, you need to look at your technology choices:

  1. Consider whether or not HTML5 is really the right choice for you. There are real benefits to be had from an HTML5 implementation, but don’t be oblivious to the risks.
  2. If you’re committed to HTML5, consider which JavaScript framework you’re going to use. Pay attention to what people are saying about the comparative performance of Sencha, Dojo Touch, and jQuery Mobile.
  3. Come up with a list of guidelines for developers, like the ones we established. Better yet, implement the first function, and ensure that it performs appropriately. Then use that implementation as an exemplar for the other functions.
  4. Spot-check performance periodically throughout the development timeline.

A well-implemented HTML5 application can be a thing of beauty. But this development approach is still young enough that such an outcome is not guaranteed.

About the Authors

BC Holmes is an IT architect with 20 years of experience designing and building applications, often working with new technology trends. She was the architect and technical lead for the team that implemented the first web banking application in Canada, and now she works in Intelliware’s Mobile Centre of Excellence.

She holds a Joint Honours in Pure Mathematics and Theatre Arts.

Raul Vieira is a full stack enterprise application developer who specializes in building rich user experiences using web standard technologies (JavaScript, CSS, HTML). He is an Agile and XP practitioner with over ten years of experience, and has been focusing his time at Intelliware on building apps in the ICT, Retail and Financial Services sectors.

Craig Wedseltoft has over five years’ experience with front-end development and specializes in HTML5, JavaScript, and CSS. He graduated from the University of British Columbia with a degree in mathematics. He works in Intelliware’s Mobile Centre of Excellence.

