We developed ‘mass user testing’ in response to the real-world needs of commercial clients and to address the deficiencies inherent in the most widely used traditional usability testing methods (we have actually been doing this for about four years, but formalised it last year).
The key to mass user testing is testing large numbers of people rapidly and cost-effectively. This is achieved by recruiting people ‘off street’ with the lure of some cash (or another incentive – we are quite creative in this regard) for about 15 minutes of their time.
Traditional testing methods involve test sessions that last about an hour (sometimes more), and typically only 5 or 6 pre-recruited (and therefore expensive) participants are tested per day.
Performed well, this approach delivers good qualitative results – for instance, it will identify usability issues in a website process and provide recommendations for improvements – but it fails to deliver valid metrics (unless costs go sky high), and it is often bloated in terms of the value each individual test delivers. The main value often comes in the first 15 minutes or so, and clients may want to focus only on discrete design changes, such as homepage or landing page modifications.
Bunnyfoot recently opened a new office specifically to deliver mass user testing. Reading, in the Thames Valley, was chosen to take advantage of demographics representative of most clients’ needs (high proportions of the A, B, C1 and C2 social grades, without a London bias) and high footfall for recruiting test participants off street.
The service was originally developed to meet the demands of Bunnyfoot’s clients for quantitative results and insights from eyetracking tests of print adverts, packaging, e-mail and direct mail. It worked so well that it was opened out to Bunnyfoot’s usability clients too, where it has proved highly effective at supporting rapid development cycles as part of a user-centred design process. Qualitative and quantitative results are returned within a day, and continual measurement allows design effectiveness to be tracked throughout the process, or competing designs to be judged objectively against each other.
To refine the service further, Bunnyfoot runs an omnibus testing programme: every week a minimum of 120 people are tested on websites and other media (print ads, TV ads, etc.) from a variety of industry sectors.
Clients can pay (either one-off or via subscription) to have their materials included in the ongoing testing. This proves particularly cost-effective when only a single page (such as a home page or landing page) or small modifications to existing pages need testing – such tests often take only a minute, so paying for a full-blown longer test would be highly wasteful.
Mass user testing is very useful, but it is not a silver bullet for all types of customer testing. It is particularly suited to B2C offerings with a demographically broad customer base (strict customer profiles can often be recruited off street, but doing so takes more manpower, costs more and takes longer, which rather defeats the point).
It is an excellent extra tool in the spectrum of user-centred design testing activities, to be deployed when most appropriate and often alongside more traditional testing methodologies.
I bet that, without too much thought, you will find a use for it.