Links on Usability Engineering

Fill Your Portfolio With Stories

When exploring our next career move, we'll likely need to show the path we've been on. As part of a design team, that usually means displaying our work.

However, if we didn't make proper arrangements before we took the job, it's very likely we can't show much of our work to anyone. Consultants, contractors, and full-time employees are usually covered (in the US at least, and in most other places as well) by a "work for hire" agreement, which means that the people we work for own all the work product we produce.

Wireframes, sketches, and other deliverables are not ours to show. If the final design isn't publicly visible, such as with an internal application, there might be no evidence of what we've done.

This puts us in an uncomfortable position when it comes time to show our work to a prospective employer. How do we show what we're capable of when we don't have access to our work? What can we put into our portfolio when our work is all locked up? The simple answer: fill your portfolio with stories.

Fill Your Portfolio With Stories

234 Tips and Tricks for Recruiting Users as Participants in Usability Studies

A well-managed recruiting program at an organization allows teams to quickly find quality participants for usability studies.

This free 190-page report from the Nielsen Norman Group gives you 234 guidelines on how to set up and manage a recruiting program. It also presents advice on when to outsource to a recruiting agency and when to use in-house recruiting.

Topics covered

  • Learn how to set up and manage a recruiting program to get the right users for usability studies
  • Know when it’s appropriate to outsource to a recruiting agency or use in-house recruiting
  • Planning for recruiting
    • Recruiting criteria
    • Incentives
    • Going to participants vs. having them come to you     
    • Screening script and questionnaire      
  • Screening and scheduling participants
    • Recruiting on your own
    • Working with an outside recruiting agency      
    • Reusing participants
  • Running the test sessions
    • Ensuring participants’ safety, privacy, and physical comfort    
    • Preparing session materials        
    • Dealing with unqualified participants     
  • Building and maintaining a participant database and recruiting staff   
  • Sample scripts and forms
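As a rough illustration of the participant-database and reuse topics listed above, here is a minimal sketch in Python. The field names and the cooldown rule are assumptions for illustration; the report itself prescribes no particular schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for a recruiting database; field names are
# illustrative, not taken from the report.
@dataclass
class Participant:
    name: str
    email: str
    sessions: list = field(default_factory=list)  # dates of past sessions

def eligible_for_reuse(p: Participant, today: date, cooldown_days: int = 180) -> bool:
    """Assume a participant may be reused once a cooldown period has
    passed since their most recent session (the cutoff is an assumption)."""
    if not p.sessions:
        return True
    return (today - max(p.sessions)).days >= cooldown_days

p = Participant("Ada", "ada@example.com", [date(2024, 1, 10)])
print(eligible_for_reuse(p, date(2024, 12, 1)))  # True: well past the cooldown
```

In practice this would live in a real database, but the idea is the same: track past sessions per participant so the "reusing participants" decision is a query, not guesswork.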

234 Tips and Tricks for Recruiting Users as Participants in Usability Studies (1.3 MB, PDF)

No More “No Shows” — How to Make Sure Your Research Participants Actually Show Up

“No shows” stink. A few startups recently complained to the author that after diligently planning UX studies and recruiting a great batch of customers, some of their participants just didn’t show up. That’s incredibly frustrating, can be embarrassing in front of the team, and wastes everyone’s time.

Here are a few habits that have dramatically reduced “no shows” at the author’s studies:

  1. Avoid scheduling interviews on Mondays or immediately before or after holidays
  2. Offer an incentive that’s big enough to motivate people to show up
  3. Don’t start recruiting too far in advance
  4. Send recruits clearly written confirmation emails
  5. If parking is difficult in your neighborhood, give them specific instructions and assistance
  6. Ensure all communication (phone calls, emails, etc.) to your participants is respectful, professional, and organized
  7. Warn recruits ahead of time that the sessions will be 1-on-1 interviews
  8. Call participants to remind them about their appointments the day before
  9. Elicit several responses from your recruits in the days leading up to the study
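Habit #1 is mechanical enough to automate. Here is a small sketch that flags proposed session dates falling on a Monday or immediately before or after a holiday; the holiday list is an assumption, and in practice you would plug in your own calendar:

```python
from datetime import date, timedelta

def risky_no_show_date(proposed: date, holidays: set) -> bool:
    """Return True if the date is a Monday or adjacent to a holiday."""
    if proposed.weekday() == 0:  # Monday
        return True
    day = timedelta(days=1)
    return (proposed in holidays
            or proposed - day in holidays
            or proposed + day in holidays)

holidays = {date(2025, 7, 4)}
print(risky_no_show_date(date(2025, 7, 3), holidays))  # True: day before a holiday
print(risky_no_show_date(date(2025, 7, 9), holidays))  # False: midweek, clear of holidays
```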

No More “No Shows” — How to Make Sure Your Research Participants Actually Show Up

Pitting Usability Testing Against Heuristic Review

Consider this scenario: You are managing the Intranet applications for a large company. You’ve spent the last year championing data-driven (re-)design approaches with some success. Now there is an opportunity to revamp a widely used application with significant room for improvement. You need to do the whole project on a limited dollar and time budget. It’s critical that the method you choose models a user-centered approach that prioritizes the fixes in a systematic and repeatable way. It is also critical that the approach you choose be cost-effective and convincing. What do you do?

Independent of the method you pick, your tasks are essentially to:

  • Identify the problems
  • Prioritize them based on impact to use
  • Prioritize them based on time/cost benefits of fixing the problems
  • Design and implement the fixes
In this situation, most people think of usability testing and heuristic (or expert) review. Empirical evaluations of the relative merits of these approaches outline strengths and drawbacks for each. Usability testing is touted as the optimal methodology because the results are derived directly from the experiences of representative users… The tradeoff is that coordination, testing, and data reduction add time to the process and increase the overall labor and time cost of usability testing… As such, proponents of heuristic review plug its speed of turnaround and cost-effectiveness… On the downside, there is broad concern that the heuristic criteria do not focus the evaluators on the right problems (Bailey, Allan and Raiello, 1992). That is, simply evaluating an interface against a set of heuristics generates a long list of false-alarm problems, but it doesn't effectively highlight the real problems that undermine the user experience.

There are many, many more studies that have explored this question. Overall, the findings of studies pitting usability testing against expert review lead to the same ambivalent (lack of) conclusions.
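Whichever evaluation method wins, the prioritization tasks listed above can be expressed as a simple ranking. The impact-per-cost scoring below is an assumption for illustration, not a formula from the article:

```python
# Rank identified problems by user impact weighted against fix cost.
# The scoring scheme (impact / cost ratio) is illustrative only.
problems = [
    {"name": "confusing label", "impact": 2, "fix_cost": 1},
    {"name": "broken search",   "impact": 9, "fix_cost": 5},
    {"name": "slow page load",  "impact": 6, "fix_cost": 8},
]

def priority(p):
    return p["impact"] / p["fix_cost"]  # higher = fix first

ranked = sorted(problems, key=priority, reverse=True)
print([p["name"] for p in ranked])
# ['confusing label', 'broken search', 'slow page load']
```

Note that a cheap cosmetic fix can outrank a severe but expensive one under this scheme; whether that's desirable depends on how you weight impact versus cost.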

Pitting Usability Testing Against Heuristic Review (link leads to a cached Google page since the original link is dead; a good piece of content nonetheless)

Severity Ratings for Usability Problems

Severity ratings can be used to allocate the most resources to fixing the most serious problems, and can also provide a rough estimate of the need for additional usability efforts. If the severity ratings indicate that several disastrous usability problems remain in an interface, it will probably be inadvisable to release it. But one might decide to go ahead with the release of a system with several usability problems if they are all judged to be cosmetic in nature.

The severity of a usability problem is a combination of three factors:

  • The frequency with which the problem occurs: Is it common or rare?
  • The impact of the problem if it occurs: Will it be easy or difficult for the users to overcome?
  • The persistence of the problem: Is it a one-time problem that users can overcome once they know about it, or will users repeatedly be bothered by it?
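The article describes the combination of these factors qualitatively. As a purely illustrative sketch (the 0–2 scales, the summing, and the label cutoffs are all assumptions, not the article's method), one could map the three factors to a single label like this:

```python
# Hypothetical mapping of the three severity factors to one label.
LABELS = ["cosmetic", "minor", "major", "catastrophic"]

def severity(frequency: int, impact: int, persistence: int) -> str:
    """Each factor scored 0 (low) to 2 (high); the sum picks a label."""
    total = frequency + impact + persistence  # ranges 0..6
    return LABELS[min(total // 2, 3)]

print(severity(frequency=2, impact=2, persistence=2))  # 'catastrophic'
print(severity(frequency=0, impact=1, persistence=0))  # 'cosmetic'
```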

Severity Ratings for Usability Problems

The User-Reported Critical Incident Method for Remote Usability Evaluation

Because of the vital importance of critical incident data and the opportunity for users to capture it, the overarching goal of this work is to develop and evaluate a remote usability evaluation method for capturing critical incident data that satisfies the following criteria:

  • tasks are performed by real users
  • users are located in normal working environments
  • users self-report their own critical incidents
  • data are captured in day-to-day task situations
  • no direct interaction is needed between user and evaluator during an evaluation session
  • data capture is cost-effective
  • data are high quality and therefore relatively easy to convert into usability problems

Several methods have been developed for conducting usability evaluation without direct observation of a user by an evaluator. However, none of these existing remote evaluation methods (nor even traditional laboratory-based evaluation) meets all the above criteria. The result of working toward this goal is the user-reported critical incident method, described in this thesis.
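To make the criteria concrete, here is a sketch of what a single user-reported incident record might capture: self-reported, timestamped in the day-to-day task context, with no evaluator present. The field names are assumptions for illustration, not the thesis's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CriticalIncident:
    reported_at: datetime   # captured in the day-to-day task situation
    task: str               # what the real user was trying to do
    description: str        # the user's own account of the incident
    severity_guess: int     # user's self-assessed severity, e.g. 1-5

incident = CriticalIncident(
    reported_at=datetime(2024, 3, 5, 14, 30),
    task="exporting a report",
    description="Export button appeared disabled after selecting PDF format",
    severity_guess=4,
)
print(incident.task)  # 'exporting a report'
```

Because each record arrives already tied to a task and a user-written description, converting a batch of them into a usability problem list is mostly a matter of grouping and review, which is the cost-effectiveness argument the criteria make.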

The User-Reported Critical Incident Method for Remote Usability Evaluation (PDF, 1.8 MB)

Preference and Desirability Testing: Measuring Emotional Response to Guide Design

An important role of visual design is to lead users through the hierarchy of a design as we intend. For interactive applications, a sense of organization can affect perceived usability and, ultimately, users' overall satisfaction with the product.

What stakeholders should be able to say is, "We should go with design C over A and B, because I feel it evokes the right kind of emotional response in our audience that is closer to our most important brand attributes."

Preference and Desirability Testing: Measuring Emotional Response to Guide Design

Opinion: There Is No Mobile Internet

It's time to stop thinking about the Internet and online communication in the context of a device, be it desktop, tablet, or mobile. Advances by Google and Apple have heightened consumer expectations, which now require stricter focus from us to create seamless online communications — communications that work everywhere and that get their point across. We need to embrace a device-agnostic approach to communicating with connected consumers and forget the idea of a "mobile Internet". There is only One Web to experience.

There Is No Mobile Internet

The Mobile Playbook from Google

Mobile is more central to business success than ever before. Most executives know this, but they get hung up on exactly what to do and how to do it.

The second edition of Google's The Mobile Playbook offers the latest best practices and strategies for winning in mobile: how to address the price transparency challenge and face showrooming head-on, the age-old question of when to build a mobile website and when to build a mobile app, and what it really means to build multi-screen marketing campaigns.

The Mobile Playbook

Usability and User Experience Surveys

According to Perlman (2009), “Questionnaires have long been used to evaluate user interfaces (Root & Draper, 1983). Questionnaires have also long been used in electronic form (Perlman, 1985). For a handful of questionnaires specifically designed to assess aspects of usability, the validity and/or reliability have been established, including some in the [table below].”

This wiki has a list of generic usability survey instruments that can be adapted to specific websites. Often, it is good enough to replace the word “system” with “website”. More than 15 questionnaires are listed.
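That adaptation is literally a string substitution. A trivial sketch, using an invented questionnaire item rather than wording from any specific instrument:

```python
# Reword a generic questionnaire item for a website study.
# The item text below is an invented example.
def adapt_item(item: str) -> str:
    return item.replace("system", "website").replace("System", "Website")

item = "I found the system unnecessarily complex."
print(adapt_item(item))  # 'I found the website unnecessarily complex.'
```

One caveat worth keeping in mind: validated instruments were validated with their original wording, so even small substitutions like this are commonly assumed safe rather than formally re-validated.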

Usability and User Experience Surveys