A ‘Step Backwards’ for Gestural Interfaces, or for the NN Group?

Just finished reading the article Gestural Interfaces: A Step Backwards In Usability by Donald Norman and Jakob Nielsen.
I would suggest reading the article before reading my critique below.

My immediate gut feeling was that their article didn’t hit the mark this time. Despite my admiration for Donald Norman, and despite sharing a similar (HCI) background with both authors, I feel they fail to acknowledge the degree of change that the iPhone (and the touch-screen smartphones that followed it) has brought to the interaction design field.

In fact, you can agree or disagree with their analysis at a granular level, but I think they make a fundamentally wrong claim at the beginning of their article:

“… the place for such [i.e. gestural interfaces] experimentation is in the lab. After all, most new ideas fail, and the more radically they depart from previous best practices, the more likely they are to fail. Sometimes, a radical idea turns out to be a brilliant radical breakthrough. Those designs should indeed ship, but note that radical breakthroughs are extremely rare in any discipline. Most progress is made through sustained, small incremental steps. Bold explorations should remain inside the company and university research laboratories and not be inflicted on any customers until those recruited to participate in user research have validated the approach.”

Their mindset comes from the HCI of the ’70s and ’80s, when interaction design was still in its infancy and the usability lab was the Holy Grail.

Well, Mr Norman and Mr Nielsen, times have moved on since then. The pace of innovation in interaction design has changed; it is no longer measured in years, but in months or even weeks. There is an incredible amount of “bold exploration” in the mobile space, where Google and Apple are the key players – and the others are just following. These patterns are unfolding into other spaces as well, gradually spreading to the web and ‘desktop’ interfaces.


Sure, there are problems with rapid, agile development approaches. Usability issues are pretty obvious, especially on Android. But we cannot expect companies to wait for the usability people to test devices to death before release. The “incremental steps” are going to happen in the market, not in the lab; Apple and Google can roll out these changes in a matter of weeks, with an OS update.

In my view, the smartphone market is still in the early-adopter, ‘high technology’ phase (see below); as soon as the market matures and crosses the ‘transition point’, people will choose their phone not on technology and features, but on the quality of the user experience. Norman knows this well, of course: the figure below is taken from his book ‘The Invisible Computer’, published more than 10 years ago (1998).

[Figure: technology adoption curve from ‘The Invisible Computer’ (Norman, 1998), showing the ‘transition point’ where buyers shift from technology and features to user experience]

At a granular level, Nielsen and Norman strike a few good chords when they talk about consistency and the lack of standards:

“…. the rush to develop gestural interfaces – “natural” they are sometimes called – well-tested and understood standards of interaction design were being overthrown, ignored, and violated. Yes, new technologies require new methods, but the refusal to follow well-tested, well-established principles leads to usability disaster”.

However, they also have to admit:

“The first crop of iPad apps revived memories of Web designs from 1993, when Mosaic first introduced the image map that made it possible for any part of any picture to become a UI element. As a result, graphic designers went wild: anything they could draw could be a UI, whether it made sense or not. It’s the same with iPad apps: anything you can show and touch can be a UI on this device. There are no standards and no expectations”.

Well, that is exactly the point: the first years of the web were pretty much the same – it took a few years for good UI patterns to spread and consolidate. The same thing will happen in the mobile space.

Natural gestures


They strike another good chord on how to use natural gestures:
“In Apple Mail, to delete an unread item, swipe right across the unopened mail and a dialog appears, allowing you to delete the item. Open the email and the same operation has no result. In the Apple calendar, the operation does not work. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

With the Android, pressing and holding on an unopened email brings up a menu which allows, among other items, deletion. Open the email and the same operation has no result. In the Google calendar, the same operation has no result. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

Whenever we discuss these examples with others, we invariably get two reactions. One is “gee, I didn’t know that.” The other is, “did you know that if you do this (followed by some exotic swipe, multi-fingered tap, or prolonged touch) that the following happens?” Usually it is then our turn to look surprised and say “no we didn’t know that.” This is no way to have people learn how to use a system”.

I agree with that; however, a good designer should treat a natural gesture as an ‘expert shortcut’, not as the only way to access a function:

–    In the Apple Mail example above, it is also possible to delete the email from within the opened message.
–    In the Android example, pressing and holding brings up the ‘contextual menu’; Google explicitly says it should only be used as an ‘alternative way of accessing a function’ (it is the equivalent of the right mouse button). A sketch of how this is wired up follows below.
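
As a side note for developers, this is roughly how such a contextual menu is wired up on Android. The sketch below is a minimal illustration in Java (the class and constant names are made up, not taken from any real mail client); the point is that the long-press menu is registered as an additional path to an action, never the only one.

    import android.app.ListActivity;
    import android.os.Bundle;
    import android.view.ContextMenu;
    import android.view.MenuItem;
    import android.view.View;

    public class MailListActivity extends ListActivity {
        private static final int MENU_DELETE = 1;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Long-press on a conversation row now opens a contextual menu,
            // the touch equivalent of a right-click.
            registerForContextMenu(getListView());
        }

        @Override
        public void onCreateContextMenu(ContextMenu menu, View v,
                                        ContextMenu.ContextMenuInfo menuInfo) {
            super.onCreateContextMenu(menu, v, menuInfo);
            menu.add(0, MENU_DELETE, 0, "Delete");
        }

        @Override
        public boolean onContextItemSelected(MenuItem item) {
            if (item.getItemId() == MENU_DELETE) {
                // Delete the long-pressed conversation here; the same action
                // must also remain reachable from inside the opened email.
                return true;
            }
            return super.onContextItemSelected(item);
        }
    }

Removing the contextual menu would not remove the ability to delete; it is purely an expert shortcut layered on top of the normal flow.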

Back button

On the usage of the back button in Android:

“In the Android, the back button moves the user through the activities stack, which always includes the originating activity: home. But this programming decision should not be allowed to impact the user experience: falling off the cliff of the application on to the home screen is not good usability practice. (Note too that the stack on the Android does not include all the elements that the user model would include: it explicitly leaves out views, windows, menus, and dialogs)”.

I totally agree on the above; the back/undo button behaves inconsistently across several applications and this should be addressed. But it is a ‘design execution’ problem, and it will be resolved over time as the development framework matures. They also miss the mark on Apple’s UX:

“Both Apple and Android recommend multiple ways to return to a previous screen. Unfortunately, for any given implementation, the method used seems to depend upon the whim of the designer. Sometimes one can swipe the screen to the right or downwards. Usually, one uses the back button. In the iPhone, if you are lucky, there is a labeled button.”

Well, Apple does strongly recommend the standard Back button at the top left, and it is used consistently by Apple’s signature applications. It is also well documented in Apple’s Human Interface Guidelines, and most (good) app developers seem to get the principle; ultimately, it is the developer’s choice not to apply the ‘back button’ pattern.
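
To make the ‘design execution’ point concrete on the Android side: nothing forces an application to ‘fall off the cliff’ onto the home screen. Since Android 2.0, a developer can intercept the hardware back key and decide what happens at the top of the stack. A minimal Java sketch, assuming a hypothetical showExitConfirmation() helper:

    import android.app.Activity;

    public class InboxActivity extends Activity {

        @Override
        public void onBackPressed() {
            if (isTaskRoot()) {
                // Bottom of the activity stack: instead of dropping the user
                // straight onto the home screen, ask for confirmation first
                // (or navigate to the app's own top-level screen).
                showExitConfirmation();
            } else {
                // Default behaviour: pop the current activity off the stack.
                super.onBackPressed();
            }
        }

        private void showExitConfirmation() {
            // Hypothetical helper: show a dialog or a toast before finishing.
        }
    }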

Menus

On menus and discoverability:

“The true advantage of the Graphical User Interface, GUI, was that commands no longer had to be memorized. Instead, every possible action in the interface could be discovered through systematic exploration of the menus. Discoverability is another important principle that has now disappeared. Apple specifically recommends against the use of menus. Android recommends it, even providing a dedicated menu key, but does not require that it always be active. Moreover, swipes and gestures cannot readily be incorporated in menus: So far, nobody has figured out how to inform the person using the app what the alternatives are”.

Menus can be good for feature discovery on a large screen, but they are not a viable alternative on such small screens. Some of the most recent applications have pushed forward a ‘dashboard’ design pattern that lets users get an idea of the main features of the app (e.g. Facebook and LinkedIn on iPhone, Twitter on Android). Again, it takes some time for good interaction design patterns to emerge and consolidate.
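
For completeness, the ‘dedicated menu key’ mentioned in the quote maps onto an Activity’s options menu on Android: everything placed there can be discovered with a single key press, without memorising any gesture. A minimal Java sketch (the item ids and labels are invented for illustration):

    import android.app.Activity;
    import android.view.Menu;
    import android.view.MenuItem;

    public class DashboardActivity extends Activity {
        private static final int MENU_REFRESH = 1;
        private static final int MENU_SETTINGS = 2;

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            // Called when the hardware MENU key is first pressed:
            // every action listed here can be found by simple exploration.
            menu.add(0, MENU_REFRESH, 0, "Refresh");
            menu.add(0, MENU_SETTINGS, 1, "Settings");
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            switch (item.getItemId()) {
                case MENU_REFRESH:
                    // Refresh the dashboard content.
                    return true;
                case MENU_SETTINGS:
                    // Open the settings screen.
                    return true;
                default:
                    return super.onOptionsItemSelected(item);
            }
        }
    }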

Conclusions

I feel that this essay is slightly off the mark; Norman and Nielsen seem more interested in cashing in their credibility in the ‘booming’ mobile and tablet user experience design market by firing their guns at the whole industry. They see the glass as ‘half empty’, failing to acknowledge the outstanding amount of value that the mobile revolution has brought to people in terms of playful interactions, voice-based interaction, and wayfinding in the physical world.

Just consider the following:

  1. With gestural and haptic interfaces, interaction has become more playful; people engage with these devices in a completely different way from desktops, and as a result they develop a much more intimate, emotionally charged relationship with their devices and apps.
  2. Search becomes contextual on a mobile device; the phone browser knows the user’s location and can provide location-based results.
  3. Consequently, mobile maps become an extremely powerful way to ‘augment’ our understanding of the environment: users can quickly and intuitively search or browse for shops, goods, people, and recommendations near them.
  4. The use of voice becomes an alternative way to interact with the device; thanks to Google and Cloud Computing, we can now speak to our phones and get an answer in return.

The three years 2008–2010 mark another phase of the Internet revolution in how people use technology to mediate their relationship with their environment and their social networks. In the coming years, we will witness the further expansion of this revolution to less technologically savvy users: some people will experience the Internet for the first time through their mobile phones.

This entry was posted in Mobile UX by Giorgio Venturi.