
Usability Testing


Testing design: Testing users' impressions of a design. It would be wrong to have a series on design methodology without looking at the subject of testing design.

Testing design: Testing users' impressions of a design

One of the fundamental principles behind Headscape's design approach is to design with data. Design is subjective: what I like you may well hate, and vice versa. The normal route for resolving this problem is to focus on what the user wants, rather than what the client or designer thinks is best.

The mobile testing challenge: How to improve your UX and prepare for the future. It's one of the biggest headaches for mobile developers and organizations launching mobile initiatives, and one where the most capital can be wasted: mobile testing.

The mobile testing challenge: How to improve your UX and prepare for the future

SUM: Single Usability Metric. Presented at CHI 2005. Jeff Sauro • April 17, 2005. SUM is a standardized, summated, single usability metric.

SUM: Single Usability Metric

It was developed to represent the majority of variation in four common usability metrics used in summative usability tests: task completion rates, task time, satisfaction, and error counts. The theoretical foundations of SUM are based on a paper presented at CHI 2005 entitled "A Method to Standardize Usability Metrics into a Single Score."

Usability Scorecard. The UsabilityScorecard web application will take raw usability metrics (completion, time, satisfaction, errors, and clicks), calculate confidence intervals, and graph the results automatically.
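The core idea of standardizing several metrics into one score can be sketched as follows. This is a simplified illustration, not Sauro's actual SUM procedure: the data, the specification limits, and the two-metric reduction are all assumptions for the example. Each metric is converted to a z-score against a spec limit, mapped to a percentage, and the percentages are averaged.

```python
from statistics import NormalDist, mean, stdev

def standardize(values, spec_limit, higher_is_better=True):
    """Map a raw metric to a percentage by comparing the sample mean
    against a specification limit (z-score -> percentile).
    A hypothetical simplification of the SUM standardization step."""
    m, s = mean(values), stdev(values)
    z = (m - spec_limit) / s
    if not higher_is_better:  # e.g. task time or error counts: lower is better
        z = -z
    return NormalDist().cdf(z)  # proportion of users expected to meet the spec

# Hypothetical task data: satisfaction (1-5 scale, spec >= 4), time in seconds (spec <= 60)
sat = [4.2, 3.8, 4.5, 4.0, 3.9]
times = [48, 72, 55, 61, 50]

sum_score = mean([
    standardize(sat, 4.0, higher_is_better=True),
    standardize(times, 60.0, higher_is_better=False),
])
print(f"SUM (two-metric sketch): {sum_score:.0%}")
```

The real SUM also folds in completion rates and errors and weights the components; this sketch only shows the standardize-then-average shape of the idea.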

SUM Calculator. Quantitative Usability - Papers. Scene 1: DAY, INTERIOR, HALLWAY You: Hey, I just heard our last usability test of the redesigned shopping cart page only found a couple problems.

Quantitative Usability - Papers

And we can fix those in the next day or so. We're almost ready to roll.
Marketing: Great. What percentage of your test subjects were able to actually get through the shopping cart?
You: Well, I don't have the details, but my buddy said we had a success rate of 80%.
Marketing: Great.
You (smiling): Cut me a check while you're there.
Scene 2: DAY, INTERIOR, MARKETING OFFICE.

Margins of Error in Usability Tests. The 20/20 Rule of Precision. Jeff Sauro • August 6, 2009. How many users will complete the task, and how long will it take them?

Margins of Error in Usability Tests

If you need to benchmark an interface, then a summative usability test is one way to answer these questions. Summative tests are the gold-standard for usability measurement. But just how precise are the metrics? Just as a presidential poll uses a sample to estimate outcomes for the entire population, usability tests also estimate the population task time and completion rate from a sample of users.

To find out, I examined a large set of data from an earlier analysis Jim Lewis and I conducted. The margin of error is half the width of the confidence interval, and the confidence interval tells us the likely range in which the population mean or proportion will fall.

Plotting Likert Scales. Confidence Interval Calculator for a Completion Rate. Jeff Sauro • October 1, 2005. Use this calculator to calculate a confidence interval and best point estimate for an observed completion rate.
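The margin-of-error idea described above can be sketched for a task-time mean. The task times, sample size, and the hardcoded t critical value are illustrative assumptions, not data from the article:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical task times (seconds) from a 12-user summative test
times = [34, 41, 38, 52, 29, 45, 60, 33, 40, 37, 48, 55]

n = len(times)
m = mean(times)
se = stdev(times) / sqrt(n)   # standard error of the mean
t_crit = 2.201                # t for 95% confidence, df = 11 (from a t-table)

margin_of_error = t_crit * se  # half the width of the confidence interval
ci = (m - margin_of_error, m + margin_of_error)
print(f"mean {m:.1f}s, 95% CI {ci[0]:.1f}-{ci[1]:.1f}s (margin of error {margin_of_error:.1f}s)")
```

Note that task times are typically right-skewed, so in practice a log transform or the geometric mean is often preferred; the plain t-interval here is only for illustrating the margin-of-error arithmetic.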

Confidence Interval Calculator for a Completion Rate

This calculator provides the Adjusted Wald, Exact, Score, and Wald intervals. You can download the calculator as an Excel file. The Adjusted Wald method should be used almost all the time. Adjusted Wald Method. Templates.

8 Tips for Doing Usability Testing at a Fast-Paced Company. At HubSpot, we have one of the fastest development teams around.

8 Tips for Doing Usability Testing at a Fast-Paced Company

Our dev team continuously deploys code, up to 100 times per day, so our product is constantly changing. This creates several challenges for us on the UX team, whose job is to ensure that the software is easy and enjoyable to use.

What is Ad Hoc Testing. What exactly is ad hoc testing?

What is Ad Hoc Testing

When will you use ad-hoc testing? Ad-hoc testing is an unscripted software testing method. People often confuse it with exploratory, negative, and monkey testing. In the domain of software testing, "ad hoc" means that the test is for a particular purpose at hand only; you cannot reuse the same test later. The following characteristics capture the real meaning of ad-hoc testing: it is a random, unscripted test, performed without a formal test plan, procedures, or documentation of results. During the 1990s, ad-hoc testing gained a bad reputation, as people presumed it was a careless way of testing. If you conduct a test only once in a series of different tests, you can call it an ad-hoc test.

Ad-Hoc Usability Testing. We at MAYA have been interested for a while in the differences between usability tests where the tasks are well-defined beforehand and those that use a looser structure, where the user has greater autonomy to explore the interface or the product.

Ad-Hoc Usability Testing

Observing the user while they’re allowed to explore a system on their own has merit — after all, they won’t have a usability test moderator telling them what task to do next when they’re using the system in earnest. On the other hand, if there’s no structure to the test, a participant may not encounter many areas of the user interface, or it may take more users to get complete coverage of a system. It’s also hard to make objective measurements (error rates, time-on-task) if the tasks are generated in an ad-hoc fashion — not only will the tasks not be known by the moderator, but each participant will have a different task set.

Usability and user experience surveys.

Usability and user experience surveys

Basics of Website Usability Testing. "It takes only five users to uncover 80 percent of high-level usability problems." – Jakob Nielsen.

Emergence of Usability Testing. In the beginning, there was an ocean liner. It was in the late 1940s. Henry Dreyfuss designed the staterooms for the ocean liners "Independence" and "Constitution" and installed them in a warehouse.

An Introduction To A/B Testing. A/B testing isn't a new term. In fact, many canny marketers and designers are using it at this very moment to obtain valuable insight regarding visitor behavior and to improve website conversion rates.
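Nielsen's "five users" claim quoted above comes from a simple detection model (Nielsen and Landauer): the proportion of problems found by n users is 1 − (1 − L)^n, where L is the average probability that a single user uncovers a given problem, commonly estimated at about 0.31. A quick sketch of the curve:

```python
# Nielsen & Landauer model: proportion of usability problems found by n users.
# L ~ 0.31 is the commonly cited average per-user detection rate.
L = 0.31

for n in (1, 3, 5, 10):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> ~{found:.0%} of problems")
```

With L = 0.31, five users find roughly 84% of problems, which is where the "about 80 percent" rule of thumb comes from; the curve also shows the diminishing returns that motivate several small tests over one large one.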

Unfortunately, A/B testing still remains in the dark for most online marketers and web designers. The technique is still underrated compared with similarly valuable methods like SEO.

What is Usability Testing? Advanced Common Sense - Steve Krug's Web site.

Usability testing. Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users. Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are food, consumer products, web sites or web applications, computer interfaces, documents, and devices.

Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.
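As a closing sketch of how the A/B comparisons introduced earlier are typically evaluated: a two-proportion z-test on the two variants' conversion rates. The experiment data and sample sizes below are illustrative assumptions, and the z-test is one common choice rather than the only valid analysis:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment on conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. variant A at 90/1000
z, p = two_proportion_z(90, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value here suggests the difference between variants is unlikely to be sampling noise, which is the same precision question the margin-of-error articles above raise for single-sample usability metrics.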