19 Oct 2009

HCI learning, a day analyzing user experience, and thoughts about remote usability testing

by Steve

My membership of the Usability Professionals' Association went through this week (although disappointingly I have to wait a whole 4-5 weeks for my Designing The User Experience poster), and to celebrate I went to the UX Brighton event ('Remote User Research – A 360˚ View'), and met the head of the UK Chapter of the UPA, Claire Mitchell (small world!). I've written more about this at the end of this blog post, but it's a bit epic, so I'll cover everything else first!

[Image: a paper mockup of the T1000]

This week in HCCS, we've been learning about the process of making paper mockups (mostly scissors and sticky-back plastic!), and their advantages (quick, manages users' expectations, gives the opportunity to hide in a box and pretend to be a robot).

This has been supplemented by the (rather dull) course textbook, Dix's 'Human-Computer Interaction'. Dix tells us about the ways to input information into a human (sight, touch, sound, smell, etc.), how it's stored (sensory memory, short- and long-term memory – needs more RAM!), and our limitations (we can only remember around 7 chunks at a time – a factor in Tetris's success!). I've also ordered Alan Cooper's 'The Inmates are Running the Asylum', which should be a more interesting read – when Amazon can be bothered to deliver it.

 

The design complaint I contributed this week was Amazon's sign-in link being "Sign in to get personalised recommendations" (with the link anchored on the 'personalised recommendations' text, rather than on 'Sign in').

 

[Image: a design mistake?]

As documented in Krug's Don't Make Me Think, most users will 'scan' a page rather than read the full text, looking for buttons or links which do the task they are looking for. As someone looking to sign in, my 'scan' would reject this link because a) 'Sign in' isn't the linked text, and b) you'd assume the link would take you to personalised recommendations, not the sign-in page. However, as we discussed in class, Amazon do a lot of A/B testing (running two versions of the page concurrently with slight differences, to see which one gets the higher 'goal completion' rate). Therefore we have to assume that this has been a conscious choice by Amazon, either because more people are looking for personalised recommendations than to log in, or because it increases customers' awareness of this feature.
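For the curious, here's a minimal sketch of how an A/B test like this might work behind the scenes. The bucketing by user ID, the variant names and the 'goal completion' tally are all my own illustration, not Amazon's actual system:

```typescript
// Minimal A/B test sketch: deterministically assign each user to a
// variant, then compare 'goal completion' (e.g. sign-ins) per variant.
// All names here are illustrative, not a real implementation.

type Variant = "A" | "B";

// Simple string hash so the same user always sees the same variant.
function hashString(s: string): number {
  let h = 0;
  for (const ch of s) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

function assignVariant(userId: string): Variant {
  return hashString(userId) % 2 === 0 ? "A" : "B";
}

// Tally of impressions and goal completions per variant.
const stats = {
  A: { shown: 0, completed: 0 },
  B: { shown: 0, completed: 0 },
};

function recordImpression(userId: string): Variant {
  const v = assignVariant(userId);
  stats[v].shown += 1;
  return v; // caller renders the page version for this variant
}

function recordGoalCompletion(userId: string): void {
  stats[assignVariant(userId)].completed += 1;
}

// After enough traffic, the variant with the higher rate 'wins'.
function completionRate(v: Variant): number {
  return stats[v].shown === 0 ? 0 : stats[v].completed / stats[v].shown;
}
```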

We've been given the task of logging our experiences with technology through a day, and considering them from a design point of view. That's what you lucky people are in for now! (Hold on tight, it's ranty!)

 Waking up:

Alarm Clock – Hit snooze (big button on top, good design feature). Turned it off by turning the radio on and off. Design fail – I imagine there’s an ‘official’ way to turn the alarm off, but in ten years of use, I’ve never found it.

[Image: design success – you won't fall asleep with this alarm clock near you!]

iPhone – ran out of battery last night, and I left the plug of the charger at my parents', so it has to charge off USB. Plugged it into my work laptop to charge, but the USB port only charges when the laptop is on (not in standby!). Design fail – annoying that I have to have the laptop on to charge my phone.

TV – is quite old, and turns on to the analogue channels rather than the SCART input. We have cable, so it only ever uses SCART. I guess it should auto-detect whether there's an analogue or SCART signal being fed in, and select which to show automatically. Design fail – it doesn't, though.

Employment fun:

Laptop – the backlight failed on the screen, so I have to take it into work to get it replaced. The replacement has no battery life, so it won't survive unplugged. Design fail – laptops are too frail for my clumsy ways.

 Successfully got to my desk with the new laptop, and charged my phone with no design issues!

IP Phone – I don't understand it. It says I have a missed call, but no details of when/who/what. A red light is lit on the handset; I can't recall whether it's always been like that. Later in the day it tells me I have a voicemail, with a flashing envelope icon. I lift the receiver and press the button next to the flashing icon. Nothing happens. I try again with the receiver down; the phone beeps at me. I lift the receiver and try other things. The button marked 'messages' does it. It asks for a PIN. I have no idea what it is, but I'm logged into the phone, so it should know it's me already, right? Eventually I find my registration email with a voicemail PIN. Successfully retrieve voicemail. Design fail – too many to count.

Coffee machine – I've worked this out now, but it took a short amount of observation when I joined. It's next to a pile of cups. Do you need to put the cup in the machine before selecting a drink? If so, where? (It turns out, for all of you who are worrying, that it doesn't need any cups – it automatically gives you one.) Design fail – not clear how to load/use it initially.

 Home time:

Sky+ – I'm not particularly familiar with Sky+, so it's a learning experience… Design fail – every time you return to the TV guide, it goes to the start of the list!

 Book – papercut! Ow! Design fail – paper should be replaced with some sort of foam.

 What a busy day!

 

 

My impressions of the UX Brighton event

The Remote User Research – A 360˚ View event was in the Old Music Library, which, although lacking in heating and lighting, does have a lot more scary art than most venues. Free beer was generously supplied by the sponsors, which started the night off on a good foot. The topic of the evening was remote usability testing, with talks given by Feralabs, Ethnolabs, Pidoco, and Flow.

The first three talks were presentations of technology the companies had developed. Ethnolabs have produced an API which collects data on specifically tagged topics from feeds such as Twitter, social networking sites and email correspondence, which can then be used to correlate user experiences. The example they used to demonstrate this was people's impressions of a new digital camera. Although their API technology seemed functional, I was underwhelmed by their product – although the piecemeal opinions of users aren't useless, I think that without specific tasks to attempt, or interview questions being asked, it'd be hard to draw any standardised conclusions from the data. I'd also question what incentives would be offered to users to bother tweeting their opinions – without an incentive prompting every user to tweet, the data retrieved will be biased towards polarised views ("I hate this!").
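As a toy illustration of the general idea (this is my own sketch, not Ethnolabs' actual API), collecting tagged posts from a feed and grouping them by author might look something like this:

```typescript
// Toy sketch of collecting tagged feedback from a feed - my own
// illustration of the general idea, not Ethnolabs' actual API.

interface FeedPost {
  author: string;
  text: string;
  source: "twitter" | "email" | "social";
}

// Keep only posts mentioning the study's agreed tag...
function collectTagged(posts: FeedPost[], tag: string): FeedPost[] {
  return posts.filter((p) => p.text.toLowerCase().includes(tag.toLowerCase()));
}

// ...then group them by author, so one participant's impressions
// across sources can be correlated into a single 'experience'.
function groupByAuthor(posts: FeedPost[]): Map<string, FeedPost[]> {
  const groups = new Map<string, FeedPost[]>();
  for (const p of posts) {
    const existing = groups.get(p.author) ?? [];
    existing.push(p);
    groups.set(p.author, existing);
  }
  return groups;
}

// Example: impressions of a hypothetical '#newcamera' study.
const experiences = groupByAuthor(
  collectTagged(
    [
      { author: "@alice", text: "#newcamera battery life is great", source: "twitter" },
      { author: "@bob", text: "I hate this! #newcamera", source: "twitter" },
    ],
    "#newcamera"
  )
);
```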

The second talk was by Pidoco, about their collaborative wireframing tool. The technology here did impress me, and I can see the value in being able to immediately adjust and present new wireframes to a client remotely (the system also logged voice, so longer suggestions could be reviewed later). The artistic style of the wireframes imitated pencil sketches, rather than the precise lines you'd get in OmniGraffle, which also helps in managing clients' expectations. I've presented precise-looking wireframes before, and the client spent a long time reviewing minor items like the text within them. Pidoco's tool's emphasis on a rough sketch aesthetic would help manage situations like this!

The last two talks were slightly linked – a presentation of a remote data logging tool by Feralabs, which gives users tasks to complete and logs their precise experience in doing them, and a report by Flow on their experiences using this tool. The tool seemed effective, logging the user's navigation and mouse clicks and asking them questions afterwards, and Flow's review was interesting and sold the idea to me. I would definitely consider using a logging technology like this for certain kinds of usability testing. In the heated Q&A session afterwards, it was discussed at length that tools like this should be used in conjunction with, and not instead of, face-to-face interviews: it was agreed that remote usability studies cannot log or reproduce every element of a close personal study – you fail to see the emotions and reactions of the participant – and it's harder to adapt the test to study interesting emerging behaviours. However, it is cheaper, and I know the business side of most organisations will like the sound of that!
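To give a feel for what 'logging the user's precise experience' involves, here's a browser-side sketch of the sort of event capture such a tool might perform. It's entirely my own illustration – the '/log' endpoint and the event shape are hypothetical, not Feralabs' implementation:

```typescript
// Browser-side sketch of remote usability logging: record clicks and
// page navigation, then send the batch home for analysis. Illustrative
// only - the '/log' endpoint and event shape are hypothetical.

interface UsabilityEvent {
  type: "click" | "navigation";
  target: string; // rough description of what was clicked
  url: string;
  timestamp: number;
}

const events: UsabilityEvent[] = [];

// Log every click with a rough identifier for the clicked element.
document.addEventListener("click", (e) => {
  const el = e.target as HTMLElement;
  events.push({
    type: "click",
    target: `${el.tagName.toLowerCase()}#${el.id || "?"}`,
    url: location.href,
    timestamp: Date.now(),
  });
});

// On leaving the page, record the navigation and flush the batch.
// sendBeacon is used because it survives page unload.
window.addEventListener("beforeunload", () => {
  events.push({
    type: "navigation",
    target: "",
    url: location.href,
    timestamp: Date.now(),
  });
  navigator.sendBeacon("/log", JSON.stringify(events));
});
```

A real tool would also capture scrolling, form interactions and task completion, and present follow-up questions after each task, but the pattern is the same: unobtrusive capture in the participant's own browser.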

If you found this interesting, you may like:

  1. Evaluating existing technologies, paper prototypes in action, Windows 7 and the disappointing user experience of my DVD player!
  2. The User Experience of waiting for the bus
  3. Remote Research – Book Review
  4. Usability Thoughts – Mass Effect
  5. No user testing? Oops! – The Digiscent iSmell
11 Comments:
  1. Sam 19 Oct, 2009

    If you find Dix’s book dull, I would recommend “Interaction Design: Beyond Human-Computer Interaction” as an excellent broad (perhaps too much so) overview.

  2. Tony Tulathimutte 19 Oct, 2009

    Hi Steve, very detailed post! I work for a remote user research firm (Bolt | Peters in San Francisco), and I’m currently co-writing a book about remote UX methods with Nate Bolt.

    We hear concerns about the “facial expressions” issue from clients all the time, as a supposed shortcoming of remote methods. It’s our humble opinion that facial expressions aren’t really necessary for doing good user research, because we’ve found that when it comes to collecting findings and insights about how to improve the interface, it’s the users’ onscreen behaviors you want to pay attention to. Facial expressions can help you understand how the user is feeling in a general way (frustrated, surprised, confused), but there are also lots of other ways to get that information—most people are generally adept at conveying these feelings in their tone of voice, for example.

    There are exceptions of course—in interfaces where the user’s feelings play a central role (e.g. video games) you’d want to keep close tabs on facial expression. We did a study for EA’s Spore in 2008, for which we did a crazy elaborate setup to monitor and record the users’ expressions, while communicating with them quasi-remotely. (Links below if you’re curious.)

    As for remote research being “cheaper”, we actually find that that’s not necessarily the case. You can usually save some minor costs on travel and recruiting, but usually the main expense is the moderator and researchers’ time, which is comparable to that of a normal study. In our view, cost-cutting isn’t the main reason to do remote research—it’s the ability to see people working on their own computers, just as they’re about to perform a real task you’re interested in watching.

    Links!:

    Boxes and Arrows article: www.boxesandarrows.com/view/researching-video

    Our blog post (w/ videos): boltpeters.com/blog/how-bp-researched-spore/

  3. Steve Bromley 20 Oct, 2009

    Hi Sam, thanks for the recommendation.
Maybe I'm a tad unfair on Dix et al. – it's not really that dull, it's just a tad epic, and so takes a lot to get through each week's chapters! Will definitely check out your recommendation though.

  4. Steve Bromley 20 Oct, 2009

    Hi Tony
    I read your article and blog post – very interesting (I particularly enjoyed the video of players' reactions to Spore!).
    You make an interesting point that the frustration/enjoyment we look for in facial expressions can be accurately conveyed by tone of voice. I guess one issue is the user's familiarity with 'thinking aloud' when performing tasks – maybe some participants wouldn't do this naturally, and some data may be lost that facial analysis would catch. As you say, the context is important though.

    I should also clarify about the 'cheapness' of remote research. I believe (but didn't mention) that the Webnographer tool offers a degree of analysis as well. Their tool also doesn't require moderation, working from the user's browser in their own time (a key advantage highlighted in their presentation was the 'ability to work while on holiday'). So maybe in this case it would be cheaper – and, as you say, it gives a more accurate reflection of the user experience from their own computers.

    Thanks for checking out the blog!

  5. Ofer Deshe 21 Oct, 2009

    Thank you for covering the event well. I do agree with you that by simply monitoring Twitter without well-defined research questions, tasks and incentives, it could be hard to reach valid conclusions. However, I thought that our main example showed how people were given specific tasks related to using a specific type of camera in different contexts. They then uploaded their impressions and satisfaction level into our tool, answering specific questions. By the way, they were also given an incentive. Our tool uses a dedicated iPhone application, a WAP/.mobi site and a web-based diary, all of which are configurable and allow specific interview questions to be asked and linked to structured tasks. The key is the capture of data at the point of experience, which could be whilst completing a task. The EthnoLabs app also simplifies data analysis by using qualitative analysis tools that implement a number of algorithms.

    I am surprised that you decided to focus only on our API and on Twitter, which was just one example of a feed we monitor, as an additional way to track trends and brand-related conversations.

  6. Steve Bromley 21 Oct, 2009

    Hi Ofer, thanks for taking a look!

    My issue wasn't just with monitoring Twitter, it was with the unmonitored nature of the tasks – I think that asking people to log their impressions of using the camera after the event would produce a different range of responses than logging their impressions during the experience. For example, a bad experience downloading the pictures off a camera may cloud the user's opinion of every aspect of the camera if they're asked about it later when writing their web diary.

    I think you've addressed this, though, as you said in your comment above, by developing things such as the WAP interface and the iPhone app – these would allow the user to log their experience while using the camera, and avoid losing valuable user experience data.

    Let me know when the website is live, and I'll be sure to check it out in more detail and link to it!

  7. Jack Josephy 21 Oct, 2009

    Nice post Steve. I have actually been using Pidoco to help with a client project. The tool is pretty intuitive, with about a half-hour learning curve, though it occasionally suffered from a bit of lag due to the fact it works online. But I'm all for tools that make UX/web design easier, and I do believe remote testing often captures the natural user-system interaction more precisely than live testing, particularly in terms of actual usage data.

    Keep up the good work

  8. UX Brighton Round-up | Remote Usability 21 Oct, 2009

    [...] Steve Bromley: In the heated Q&A session afterwards, it was discussed at length that tools like this should be used in conjunction with, and not instead of, face-to-face interviews: it was agreed that remote usability studies cannot log or reproduce every element of a close personal study – you fail to see the emotions and reactions of the participant – and it's harder to adapt the test to study interesting emerging behaviours. [...]

  9. Tony Tulathimutte 21 Oct, 2009

    Hey Steve—you're right, Webnographer (and other automated services like Loop11, Usabilla, Webeffective, UserZoom, etc.) do provide a way to get around moderating expenses, but those studies address different issues: automated research is generally good for large-sample, quantitative answers to very specific and pre-defined tasks.

    Moderated research, on the other hand, is good for obtaining qualitative behavioral feedback in a rich usage context—you can see all the different tools and websites and workflows and even physical artifacts the participant is using to get his task done. It helps you understand the motivation of the task, not just the outcome. And on top of that, you can see completely unexpected behaviors come up, ones that you didn’t plan for during testing, and that can’t really happen in automated testing.

    That’s just a brief overview of those issues, and of course there are ways to get quantitative data from moderated studies and some qualitative data from automated ones. We deal more with those issues in our book, which is coming out soon. Check it out if you’re interested: www.rosenfeldmedia.com/books/remote-research/

  10. Steve Bromley 23 Oct, 2009

    Really interesting points Tony, and I will definitely check out the book. Good promotion!

  11. Amanda McNeill 5 Nov, 2009

    Hi Steve,

    Nice post. I definitely enjoy your sense of humor. And Feralabs sounds interesting.

    Here is an article from Website Magazine bit.ly/32mqlQ that reviews usability tools (remote and moderated) as well as usertesting.com, who I am affiliated with.

    Amanda
