
Remote web usability testing has been adopted in several usability studies in recent years, mainly because of its cost-effectiveness and because it makes it possible to test a large sample of users.

Remote Web Usability Testing

In 1998 Hartson and Castillo developed and tested a cost-effective remote usability evaluation method based on users' reports of the critical incidents they encountered.

The idea behind the method was that critical incident data about the problems users encounter during task performance can help identify usability problems on a web site.

The study was conducted on 24 users, supported by software installed on their computers, a scan converter, and a videotape recorder to capture the users' expressions.

Four trained subjects were then employed to analyse selected critical incident reports sent by the user subjects and to convert them into usability problem descriptions.

The results showed that users with only brief training were able to identify critical incidents on their own and to report and rate them.

Scholtz (2001) studied how to modify traditional usability testing tools to test users remotely and performed a virtual design meeting using WebVIP and VISVIP (a logging tool and a visualization tool, respectively) on two web sites.

The idea was to collect design requirements remotely in the form of tasks or scenarios. Scholtz concluded that remote usability testing is not a substitute for traditional testing but can add valuable data, such as identifying problematic areas of web sites, difficult user tasks, or even user populations that should be considered for detailed usability testing.

Tullis et al. (2002) presented a comparison between traditional laboratory-based usability testing and remote usability testing of web sites.

The test was structured around two groups (one tested in a lab and one remotely) and two case studies: an employee benefits web site and a financial web site.


The goal of the study was to evaluate the effectiveness of the remote testing approach.

The remote test was conducted on users in their real work environment without real-time observation.

The participants were recruited by e-mail: of the 38 participants, 29 successfully completed the test.

The application used to perform the test was a browser with a main window, used to present the site being tested, and a small window across the top of the screen, used to present tasks to the user and to capture their behaviour.

The lab test was conducted individually with eight users, with an average session time of 1.5 hours.

The tasks were presented to the users on paper, using the think-aloud protocol.

There was a moderator and a data logger who recorded the completion time for each task.

The first test was conducted on an employee benefits web site, with seventeen tasks prepared for the users to perform. After the tasks, each user was asked to provide subjective feedback about the web site.

The task completion data showed that the laboratory users completed 60% of the seventeen tasks, while the remote users completed 64%. The lab users spent an average of 147 seconds per task, while the remote users spent 164 seconds.

As regards finding usability issues, in both groups the users who identified the most issues were experienced usability professionals.

In the laboratory test 26 usability issues were identified, while only 16 were identified in the remote test.


The results showed that the behaviour of the users was similar in both groups, and that they encountered similar problems.

The remote users provided rich textual comments for every task, compensating for the lack of visual interaction with the test administrator.

The main advantages of the remote test were time (the remote users performed the test whenever they wanted, without needing the presence of an administrator), cost, and the possibility of testing a larger and therefore more heterogeneous group of people.

Of course, direct observation in a lab offered the possibility of capturing particular usability issues and subjective feelings that cannot be gathered with a remote test.

Waterson, Landay and Matthews (2002) performed a usability study of a web site for wireless Internet-enabled personal digital assistants, using WebQuilt. They performed the same test in a laboratory to analyse the differences.

The test was performed on ten users, half in a laboratory and half connected remotely.

In the laboratory a moderator was present to record comments and answer questions.

The results showed that the laboratory users found 18 usability issues while the remote users found only 7, but 5 of the 7 issues related to the design of the site were discovered by the remote users.

The main difficulty in the remote test was understanding the users' motivations or problems when they abandoned the test.

The results showed that remote testing was useful for understanding design issues but poor at revealing usability issues with the device itself.

Chapter VIII

OpenWebSurvey

This chapter analyses the technical implementation details of OpenWebSurvey, a prototype web application for remote usability testing, able to record users' behaviour while they surf the Internet, guided by a questionnaire. OpenWebSurvey works without the need to install any software or hardware components, either on the client computer or on the web site server.

Although the user does not perceive any difference in the way of surfing the site under investigation, OpenWebSurvey monitors web navigation, storing data about visited pages (load time and some client-side actions), about the entire session (visited pages, total visit time, per-page visit time, general information about the user's system), and about the answers to the questionnaire.
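As a minimal sketch of the kind of data just described, the following JavaScript models per-page and per-session records. All function and field names here (makePageRecord, makeSessionRecord, loadTimeMs, and so on) are illustrative assumptions, not the actual OpenWebSurvey schema.

```javascript
// Illustrative sketch only: a plausible shape for the records a tool
// like OpenWebSurvey could build client-side. Names are assumptions.

// One record per visited page: URL, load time, time spent on the page,
// and the client-side actions observed (e.g. clicks, form input).
function makePageRecord(url, loadTimeMs, visitTimeMs, actions) {
  return { url, loadTimeMs, visitTimeMs, actions: actions.slice() };
}

// One record per session: the list of visited pages, the total visit
// time (sum of per-page visit times), and general system information
// such as the browser's user-agent string.
function makeSessionRecord(userAgent, pageRecords) {
  return {
    userAgent,
    pages: pageRecords.map((p) => p.url),
    totalVisitMs: pageRecords.reduce((sum, p) => sum + p.visitTimeMs, 0),
  };
}
```

In a real deployment the page records would be filled in from browser events (page load timing, click listeners) and sent back to the server together with the questionnaire answers.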

Adopting the taxonomies of Balbo and of Ivory and Hearst, OpenWebSurvey can be defined as software for the automatic capture of information about the user and the site, requiring minimal effort when used without a questionnaire, or formal use when used in conjunction with a questionnaire.

This software has been developed by Andres Baravalle and Vitaveska Lanfranchi, Ph.D. students at the University of Turin, Department of Computer Science. OpenWebSurvey is free software, "free" as in "free speech", not as in "free beer" (Richard Stallman, n.a.), available under the terms of the GNU General Public License (GPL). Alternatively, it can be used directly on the OpenWebSurvey main server (http://openwebsurvey.di.unito.it), on request.