Consider this scenario: You are managing the Intranet applications for a large company. You've spent the last year championing data-driven (re-)design approaches with some success. Now there is an opportunity to revamp a widely used application with significant room for improvement, and you need to do the whole project on a limited budget of time and money. It's critical that the method you choose models a user-centered approach and prioritizes the fixes in a systematic, repeatable way. It's equally critical that the approach be cost-effective and convincing to stakeholders. What do you do?
Independent of the method you pick, your tasks are essentially the same.
In this situation, most people think of usability testing and heuristic (or expert) review. Empirical evaluations of the relative merits of these approaches outline strengths and drawbacks for each. Usability testing is touted as the optimal methodology because the results are derived directly from the experiences of representative users. The tradeoff is that coordinating sessions, running the tests, and reducing the data all add time to the process and drive up the overall cost in labor and calendar time. Proponents of heuristic review, by contrast, plug its speed of turnaround and cost-effectiveness. On the downside, there is broad concern that the heuristic criteria do not focus the evaluators on the right problems (Bailey, Allan and Raiello, 1992). That is, simply evaluating an interface against a set of heuristics generates a long list of false-alarm problems, but it doesn't effectively highlight the real problems that undermine the user experience.
Many more studies have explored this question. Overall, the findings of studies pitting usability testing against expert review lead to the same ambivalent (lack of) conclusions.
Pitting Usability Testing Against Heuristic Review (Link leads to a cached Google page since the original link is dead; it's a good piece of content nonetheless)