Choosing a Sample Size
How many participants do you need for your study? This is your “sample size”.
Understanding your Question
The answer depends on the question you want to ask! In general there are two types of question: Qualitative and Quantitative. Qualitative questions are about digging into individual user experiences and behaviour. You’ll need to put yourself in the viewer’s shoes to get the insight you need to answer these questions.
Quantitative questions are about hard numbers. For example, what percentage of users look at the hero image before the side-bar? Some more examples:
- Why don’t users checkout after adding to the basket? [Qualitative]
- How many users see my promotional offer? [Quantitative]
Research shows that qualitative questions, such as looking for specific design problems or unexpected behaviour, can be answered with as few as 5-10 views. In a qualitative analysis, you only need to watch one viewer become confused or lost to have found a problem worth fixing. Of course, if you have a complicated website with many features on-screen, you may want to increase this a little (10-15 views) to capture different user behaviour patterns.
EyesDecide allows you to perform qualitative analysis by giving you instant replays of individual viewers’ gaze, mouse and scroll behaviour. You see the screen as they saw it, and you can follow individual users as they explore a page. The replay controls let you check the order of fixations and dwell times, so you know the context for each behaviour.
You’ll notice that users behave differently. Some move methodically through the page, others jump around. Some people read carefully, others skim and come back if interested. Once you’ve captured a range of typical behaviour patterns you’ll usually have insight into the design features that are giving users headaches.
You can do a quick and dirty qualitative analysis with your colleagues or a small panel of users in minutes, and iterate your design rapidly.
A quantitative analysis produces hard numbers – what percentage of users saw feature “X”? The accuracy of these numbers depends on the sample size and the strength of the result. For example, in a relatively simple comparison of two product images you may find a strong preference for one design within 10-20 views: if the split is 90/10, you can probably stop testing; if the results come in at 50/50, you may need up to 50 views to detect small differences.
A very complex page or viewing task may generate much more varied behaviour and uncertain results. For example, given a cluttered shelf-placement image, you might be measuring the time taken to find a particular product. It could take 30-50 views to be confident that one product design is easier to spot than another.
When comparing two or more variants you’ll usually need to divide your viewers between the different images. Create one study for each variant and compare results.
The other consideration when recruiting viewers to your study is the success rate. While EyesDecide does not charge you for unsuccessful viewings, you may incur a cost to recruit participants. In addition, you may not be able to control how many recruits actually attempt the study and it’s possible to overshoot.
The success rate with EyesDecide depends on three factors:
- The motivation and familiarity of your viewers
- Any control you have over the computer or location used
- Your ability to screen out viewers who do not meet all the criteria
These factors will be affected by the method you use to recruit viewers.
If you set up a desk for user testing in a well-lit environment and bring people to the desk, you will have a high success rate (80-90%). Similarly, with colleagues who are familiar with EyesDecide and have a suitable working environment, you can expect around 80%. An environment free from distractions, a good computer and Chrome already installed removes many of the hurdles.
If you have your own panel of regular at-home users who are familiar with viewing studies on EyesDecide, again your success rate will be high (80%+). We see a similar success rate with our internal panel.
First-time users are occasionally a little confused. User-testing labs that can set up a good system and screen out users with glasses get success rates of 70% or better. It can help to ask users to practise on a different image before viewing the real thing; this increases success rates by about 10-20%.
Recruiting first-time users at home is more difficult, primarily because it is hard to screen for the correct web browser, a non-mobile device, and other criteria such as having a webcam and not wearing glasses. Of those who meet the viewing criteria, between 50% and 70% will record a successful viewing. Viewers who are not successful are usually sitting in a dark room, talking on the phone, or otherwise distracted and not participating correctly.
If you do not screen viewers, if they are poorly motivated, or if they are suspicious of your invitation, as many as 50% may simply decline to participate. Recruitment strategy is therefore very important, and you may want to engage a reputable panel provider.
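The recruitment arithmetic above can be sketched in a few lines. The function name and the example figures (a 60% success rate, a 50% decline rate, drawn from the at-home ranges above) are illustrative assumptions, not EyesDecide defaults:

```python
import math

def recruits_needed(target_views, success_rate, decline_rate=0.0):
    """Estimate how many people to invite so that, after declines and
    unsuccessful viewings, you still reach the target number of views."""
    attempts = target_views / success_rate     # people who must attempt the study
    invites = attempts / (1.0 - decline_rate)  # inflate for invitees who decline
    return math.ceil(invites)

# At-home first-timers: assume ~60% of attempts succeed, ~50% of invitees decline.
print(recruits_needed(30, success_rate=0.6, decline_rate=0.5))  # -> 100
```

In other words, with unscreened at-home recruits you may need to invite roughly three times as many people as the views you want, which is why it is easy to overshoot or undershoot.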
To summarise:
- For quick-and-dirty qualitative checks for design problems: 5-15 colleagues or ordinary real users; results in minutes; 80-90% success rate.
- Looking for a specific design problem on a webpage: 10-20 views; allow about 24 hours for results.
- For a robust qualitative check on a complex webpage: 20-30 views
- For a simple quantitative question (e.g. total dwell time, time-to-look): 20-30 views
- For a complex quantitative analysis of multiple factors with high inter-viewer variability: 50 views per experimental condition*
* For example, if you have 2 advert image variants and you need to control to exclude positioning effects, you may need to test 4 screen locations; 2 variants x 4 locations = 8 conditions. 50 views x 8 conditions = 400 views.