Links on Usability Engineering

How Pocket Built a Research Lab for Mobile App Testing in Just a Few Hours

You’re ready to run a user study for your product. You’ve learned how to recruit participants, write an interview guide, interview people, and summarize results. But there’s just one problem: you don’t have access to a research lab. Learn how Pocket built a lightweight research lab for mobile app testing in their office.

Questionnaires in Usability Engineering: A List of Frequently Asked Questions

The list on this page is a compilation of the questions the author has received about the use of questionnaires in usability engineering. Questions include:

  • What is a questionnaire?
  • Are there different kinds of questions?
  • What are the advantages of using questionnaires in usability research?
  • What are the disadvantages?
  • How do questionnaires fit in with other HCI evaluation methods?
  • What is meant by reliability?
  • What is meant by validity?
  • Should I develop my own questionnaire?
  • What’s wrong with putting a quick-and-dirty questionnaire together?
  • Factual questionnaires are easy to do, though, aren’t they?
  • What’s the difference between a questionnaire which gives you numbers and one that gives you free text comments?
  • Can you mix factual and opinion questions, closed and open ended questions?
  • How do you analyse open-ended questionnaires?
  • What is a Likert-style questionnaire? One with five response choices to each statement, right?
  • How can I tell if a question belongs to a Likert scale or not?
  • How many response options should there be in a numeric questionnaire?
  • How many anchors should a questionnaire have?
  • My respondents are continually complaining about my questionnaire items. What can I do?
  • What other kinds of questionnaires are there?
  • Should favourable responses always be checked on the left (or right) hand side of the scale?
  • Is a long questionnaire better than a short one? How short can a questionnaire be?
  • Is high statistical reliability the ‘gold standard’ to aim for?
  • What’s the minimum and maximum figure for reliability?
  • Can you tell if a respondent is lying?
  • Why do some questionnaires have sub-scales?
  • How do you go about identifying component sub-scales?
  • How much can I change the wording in a standardised opinion questionnaire?
  • What’s the difference between a questionnaire and a checklist?
  • Where can I find out more about questionnaires?

Five Critical Quantitative UX Concepts

As UX continues to mature, it’s becoming harder to avoid using statistics to quantify design improvements… Here are five of the more critical but challenging concepts. The author didn’t pick arbitrary geeky stuff to stump math geeks (or to get you an interview at Google); these are fundamental concepts that take practice and patience but are worth the effort to understand.

  1. Using statistics on small sample sizes: You do not need a sample size in the hundreds or thousands, or even above 30, to use statistics. The author regularly computes statistics on small sample sizes (fewer than 15) and finds statistical differences.
  2. Power: Power is roughly the counterpart of the confidence level for detecting a difference: it is the chance your study will detect a real difference between designs, something you can’t know ahead of time (e.g. whether one design has a higher completion rate than another).
  3. The p-value: The p-value stands for probability value. It’s the probability of observing a difference as large as the one you saw in a study if chance alone were at work.
  4. Sample Size: Sample size calculation remains a dark art for many practitioners. There are many counterintuitive concepts, including power, confidence and effect sizes. One complication is that there are different ways to compute sample size. There are basically three ways to find the right sample size for just about any study in user research- problem detection, comparing and precision.
  5. Confidence intervals get wider as you increase your confidence level: The “95%” in the 95% confidence intervals the author reports is called the confidence level. A confidence interval is the most plausible range for the unknown population mean, but you can’t be sure any given interval contains the true average. By increasing the confidence level to 99%, the intervals get wider: the price of being more confident is having to cast a wider net.
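Concepts 1 and 5 can be illustrated together with a minimal sketch (the task-time data below are invented for illustration): a t-based confidence interval computed on a small sample, at both the 95% and 99% levels. The t critical values are the standard two-tailed values for 9 degrees of freedom.

```python
from statistics import mean, stdev

# Invented example data: task completion times (seconds) from a
# small-sample usability test, n = 10.
times = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.9, 12.7, 11.8, 10.4]

n = len(times)
m = mean(times)
se = stdev(times) / n ** 0.5      # standard error of the mean

# Two-tailed t critical values for df = n - 1 = 9.
t95, t99 = 2.262, 3.250           # 95% and 99% confidence levels

ci95 = (m - t95 * se, m + t95 * se)
ci99 = (m - t99 * se, m + t99 * se)

# The 99% interval is wider than the 95% one: being more confident
# that the interval contains the true mean means casting a wider net.
```

The same arithmetic works at n = 10 as at n = 1,000; only the t critical value and the standard error change.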

Usability and Customer Loyalty: Correlation Between NPS and SUS

We all want higher customer loyalty, so knowing what “levers” move the loyalty-needle is important. If you can make changes that will increase loyalty, then increased revenue should follow. So, do improvements in usability increase customer loyalty?

To find out, Jeff Sauro took one of the more popular measures of perceived usability, the System Usability Scale (SUS) [PDF] and performed a regression analysis against Net Promoter scores. In total, he examined responses from 146 users from about a dozen products such as rental car companies, financial applications and websites like Amazon.com. The data come from both lab-based usability tests and surveys of recent product purchases where the same users answered both the SUS and Net Promoter question.

He found that the Net Promoter score and SUS have a strong positive correlation of .61, meaning SUS scores explain about 36% of the variability in Net Promoter Scores. The regression equation is:

NPS = 0.52 + 0.09(SUS)

So a SUS score of 70 will generate a Net Promoter Score of about 7, and a SUS score of at least 88 is needed to be a promoter (9+).
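The reported regression can be applied directly; here is a minimal sketch (the function name is ours, the coefficients are those given above):

```python
def predicted_nps(sus: float) -> float:
    """Predicted Net Promoter rating from a SUS score, using the
    regression reported in the article: NPS = 0.52 + 0.09 * SUS."""
    return 0.52 + 0.09 * sus

# A SUS score of 70 predicts a Net Promoter rating of about 7.
print(round(predicted_nps(70), 2))  # 6.82
```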

Does Better Usability Increase Customer Loyalty? Correlation Between the Net Promoter Score and the System Usability Scale (SUS)

Breaking Down the Silos: Usability Practitioners Meet Marketing Researchers

Being a consultant with experience in both traditional marketing research and user experience/usability gives the author a unique perspective on a broad range of issues relating to customer experience. Not only does he have a good idea of what each discipline does; he is a practitioner of both.

However, in attempting to play both roles at once, he often finds that client companies keep these two disciplines locked up in separate silos—usability research within IT and marketing research within the Marketing Services department. This can have a serious impact on the sharing of information relating to customer experience.

Will Ford learn that software isn’t manufactured?

A recent article in the New York Times discusses Ford’s plummet in user rankings this year, placing the blame on its new touch-screen interface. According to the article, J.D. Power, the auto industry arbiter, dropped Ford’s ranking from 5th to 23rd, and subsidiary Lincoln’s from 8th to 17th place.

J.D. Power acknowledges that both Ford’s and Lincoln’s fit and finish are excellent. It was the “annoying” behavior of their driver-facing interactive systems that caused their ratings to plummet. Other reviewers concur: Consumer Reports yanked its “Recommended” rating from Ford’s new 2011 Edge model.

… Digital solutions are so much cheaper and more flexible than mechanical ones that they will eventually come to dominate the entire company. Companies who can master the challenge of software’s unique nature, and particularly of how humans interact with it, will thrive. Ford is learning the opposite lesson.

High Paying Jobs in User Experience Design

Here are the top-paying jobs for Information Architecture, Usability, and UX practitioners, plus reasons to explore each one for your user experience design career (and your bank account). (Salary figures based on Indeed.com and GlassDoor.com data.)

  • User experience strategist: $67,000 to $135,000
  • Usability analyst: $81,000 on average
  • User interface designer: $84,000 to $155,000
  • Interaction designer: $91,000 on average
  • Information architect: $104,000 on average

Usability Questionnaires: If you could only ask one question, use the SEQ (Single Ease Question)

Was a task difficult or easy to complete? Performance metrics are important to collect when improving usability, but perception matters just as much.

Asking a user to respond to a questionnaire immediately after attempting a task provides a simple and reliable way of measuring task-level satisfaction. Questionnaires administered at the end of a test, such as the SUS, measure perceived satisfaction with the product as a whole.

There are numerous questionnaires to gather post-task responses. The SEQ (Single Ease Question) is a new addition.

Usability as Common Courtesy

Steve Krug says that usability is about building clarity into web sites: making sure that users can understand what it is they’re looking at — and how to use it — without undue effort. Is it clear to people? Do they “get it”?

But there’s another important component to web usability: doing the right thing — being considerate of the user. Besides “Is my site clear?” you also need to be asking, “Does my site behave like a mensch?”

Response Times: The 3 Important Limits

The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:

  1. 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
  2. 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
  3. 10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
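The three limits above suggest a simple decision rule. Here is a hypothetical helper (the function name and return labels are ours) mapping an expected delay to the feedback a UI should give:

```python
def feedback_for_delay(expected_seconds: float) -> str:
    """Pick a feedback strategy from an estimated response time,
    per the three limits above (0.1 s, 1 s, 10 s)."""
    if expected_seconds <= 0.1:
        return "none"            # feels instantaneous; just show the result
    if expected_seconds <= 1.0:
        return "none"            # delay is noticed, but flow of thought stays intact
    if expected_seconds <= 10.0:
        return "busy indicator"  # keep the user's attention on the dialogue
    return "progress bar"        # let users plan; show when it will finish
```

For highly variable delays, the text suggests erring toward the richer feedback, since users won’t know what to expect.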
