Of the complete sample of 109 recorded sessions, we considered only 63.

In the analysis phase we encountered many tests that were not finished by the users, probably because of a lost connection or because the user chose to abandon the test.

Interrupted sessions, sessions without visited pages, and sessions with unrealistic page visits were excluded from the analysis stage. Of course, the reasons for abandonment are only a supposition, as OpenWebSurvey cannot track subjective data.
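To make the cleaning step concrete, the following sketch shows how such a filter could be expressed in Python. It is a minimal illustration only: the Session fields and the plausibility check are hypothetical, since OpenWebSurvey's actual data model is not reproduced here.

# Minimal sketch of the session-cleaning step described above.
# The Session fields and the plausibility check are hypothetical;
# OpenWebSurvey's actual data model is not reproduced here.
from dataclasses import dataclass
from typing import List

@dataclass
class Session:
    completed: bool    # did the user finish the test?
    pages: List[str]   # URLs visited during the session

def is_plausible(pages: List[str]) -> bool:
    # Hypothetical check for "unrealistic" visits, e.g. a session
    # consisting of nothing but reloads of a single page.
    return len(set(pages)) > 1

def clean(sessions: List[Session]) -> List[Session]:
    # Keep only completed sessions with a plausible, non-empty page list.
    return [s for s in sessions
            if s.completed and s.pages and is_plausible(s.pages)]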

Chapter X

Conclusions

In this dissertation we have presented an overview of web usability analysis approaches.

Our position is that the choice of analysis approach should depend on the researchers' background, the development stage, the targeted medium, and the available resources.

Researchers with different backgrounds are better served by different approaches. Most usability inquiry methods suit a psychological or anthropological background; server log analysis, standard inspections, and remote web usability testing instead require a background in computer science.

Different phases of development call for different web usability approaches. Approaches like contextual inquiry or ethnographic study can be useful at several stages of development, while others, like activity recording or server log analysis, are more suitable for an advanced stage of the project.

Today the web can be accessed from desktops, personal digital assistants, subnotebooks, and mobile phones, to name a few. Approaches like activity recording are almost impossible to apply to small devices like mobile phones or personal digital assistants, which instead call for approaches from the usability inspection group or for the diary method.

The available resources should also guide the choice. To study the use of a web site (typically an intranet) by a group of users, contextual inquiries or ethnographic studies would be a very good choice, but if resources are limited the diary method could be enough.

Similarly, approaches requiring logging hardware for usability testing can be replaced by approaches based on software logging, as proposed by the author.

Proxy-based remote web usability testing can be a valid aid in gathering quantitative data about the usability of a site: remote usability testing identifies the majority of the most relevant usability issues across a large number of participants. Using OpenWebSurvey, a test can be performed on a significant number of users, overcoming the benefit/cost trade-off proposed by Nielsen.

The test performed on shopping.msn.it using OpenWebSurvey confirmed the advantages of remote web usability testing. Remote web usability testing is cost effective: the time spent on usability evaluation was reduced because it was not necessary to manually log the users or record their sessions. As suggested by Ivory and Hearst (2001), this approach also made it easy to compare alternative designs of the same web site (the design differed between Internet Explorer and other browsers), a comparison that is normally difficult because it requires time, money, and resources.

The main advantage of the proxy approach over other remote web usability testing approaches is that it is not necessary to ask the web site administrator to install the application on the site under evaluation, nor to ask the users to install a particular client application on their computers.
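A minimal sketch of the general idea follows, assuming a plain HTTP proxy written in Python: participants point their browser at the proxy, which logs every request before forwarding it, so nothing has to be installed on the server under evaluation or on the participants' machines. It illustrates the technique only and is not OpenWebSurvey's actual implementation.

# Minimal sketch of a logging HTTP proxy (not OpenWebSurvey itself).
# Users configure this server as their browser's HTTP proxy; each
# request is logged with a timestamp and forwarded unchanged.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

logging.basicConfig(filename="sessions.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class LoggingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # When the browser uses an HTTP proxy, self.path holds the
        # absolute URL of the requested page.
        logging.info("%s GET %s", self.client_address[0], self.path)
        try:
            with urlopen(self.path) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Type",
                                 upstream.headers.get("Content-Type",
                                                      "text/html"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

if __name__ == "__main__":
    HTTPServer(("", 8080), LoggingProxy).serve_forever()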

Of course, this approach is limited in its knowledge of user feelings and expressions: it is not possible to communicate with the user in real time to better understand what she is doing, nor to record her facial and vocal expressions or gather subjective data.

In the author's opinion, remote usability testing, like other automated usability evaluation methods, cannot substitute for traditional evaluation techniques, but it can be a useful addition to them.

Remote usability testing is a good option for screening a list of web usability issues, to be investigated further with a traditional usability analysis in a laboratory, or for screening a sample of users from different target groups in order to form a focus group or to perform a discount test.

References

Agner, L. C., Morales, A. (2003). Corporate Web Sites and Human-Computer Interaction, a Case Study. Proceedings of HCI2003, Heraklion, Greece.

Australian net Guide (2001). Retrieved on August 1, 2003 from http://www.netguide.com.au/.

Baecker, R., Buxton, W. (1987). Readings in Human-Computer Interaction, Los Altos, CA, Morgan Kaufmann.

Balbo, S. (1995). Automatic evaluation of user interface usability: Dream or reality. Proceedings of the Queensland Computer-Human Interaction Symposium, Queensland, Australia.

Baravalle, A., Lanfranchi, V. (2003). Remote web usability testing, Behavior Research Methods, Instruments, & Computers.

Blackmon, M.H., Polson, P.G., Kitajima, M. & Lewis, C. (2002). Cognitive Walkthrough for the Web. Proceedings of CHI2002, ACM Press, Minneapolis, Minnesota, USA, pp. 463-470.

Bonazzi, G. (1998). Storia del pensiero organizzativo, Milano, Italy, pp. 29-30.

Booth, P. (1989). Introduction to Human Computer Interaction, Hove, UK.

Chevalier, A., Ivory, M. Y. (2003). Can novice designers apply criteria and recommendations to make web sites easier to read? Proceedings of HCI2003, Heraklion, Greece.

Daskalaky, S., Koutry, M. (2003). Improving web site usability through a clustering approach. Proceedings of HCI2003, Heraklion, Greece.

Dey, A. (2003). Introduction to usability. Retrieved on August 1, 2003 from http://www.its.monash.edu.au/web/slideshows/humanfactors.ppt.

Denning, P. J., et al. (1988). Report on the ACM Task Force on the Core of Computer Science, New York, ACM Press (Order No. 201880).

Dix, A., Finlay, J., Abowd, G., Beale, R. (1998). Human-Computer Interaction (second ed.). Prentice Hall, Upper Saddle River, NJ.

Ebling, M., John, B. (2000). On the Contributions of Different Empirical Data. Proceedings of DIS 2000, New York City, New York, USA, pp. 289-296.

Engelbart, D. C. (1963). A conceptual framework for the augmentation of man's intellect. In Howerton and Weeks (Eds.), Vistas in Information Handling, Washington, DC, Spartan Books, Vol. 1, pp. 1-29.

Gribble, C. (2003). History of the Web Beginning at CERN. Retrieved on August 1, 2003 from http://www.hitmill.com/internet/web_history.asp.

Hartson, H. P., Castillo, J. C. (1998). Remote evaluation for post-deployment usability improvement. Proceedings of AVI '98 (Advanced Visual Interfaces), L'Aquila, Italy, ACM Press, pp. 22-29.

Hartson, H. P., Castillo, J. C., Kelso, J., & Neale, W. C. (1996). Remote evaluation: The network as an extension of the usability laboratory. Proceedings of CHI96, New York, NY, pp. 228-235.

Helander, M. et al. (1993). Handbook of Human-Computer Interaction, Elsevier.

Heer, J., Hong, J., Waterson, S., Landay, J. (2001). WebQuilt: A Proxy-based Approach to Remote Web Usability Testing, ACM Transactions on Information Systems 19(3), pp. 263-285. Retrieved on August 1, 2003 from http://www.cs.berkeley.edu/~jasonh/publications/acmTOIS-webquilt.pdf.

Hughes, K., Jenkins, S., Wright, G. (1999). triple-s XML: A standard within a standard, Proceedings of Association of Survey Computing International Conference, Edinburgh, UK, pp. 421-433. Retrieved on August 1, 2003 from http://www.triple-s.org/sssasc99.htm.

Human Factors International (n.d.). A Bit of History – How the Problem of Accessibility Arose. Retrieved on August 1, 2003 from http://www.humanfactors.com/downloads/accessibility.asp

Ivory, M., Hearst, M. (2001). State of the Art in Automating Usability Evaluation of User Interfaces. ACM Computing Surveys 33(4), pp. 1-47.

IBM Corporation (1993). IBM CUA - Common User Access Guidelines, QUE.

Kangas, E., Sinisammal, J., Paihonen, S. (2003). Diary as a Usability Testing Method, Proceedings of HCI2003, Heraklion, Greece.

Kantowitz, B., Sorkin, R. (1983). Human Factors: Understanding People-System Relationships. John Wiley & Sons.

Karat, C. (1990). Cost benefit analysis of usability engineering techniques. Proceedings of the Human Factors Society, Orlando, FL, USA.

Kay, A., Goldberg, A. (1977). Personal dynamic media, IEEE Computer, 10(3), pp. 31-42.

Krug, S. (2000). Don't Make Me Think!: A Common Sense Approach to Web Usability. New Riders.

Landauer, T. K. (1996). The trouble with computers: Usefulness, usability, and productivity. MIT Press, Cambridge, MA.

Lanfranchi, V. (2003). A Multimodal Approach to Ubiquitous Information Management. Not yet published.

Lewis, C., et al. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of CHI '90, Seattle, WA, USA, pp. 235-242.

Lewis, C., Wharton, C. (1993) Cognitive Walkthroughs. In: Helander, M. et al. (1993), Handbook of Human-Computer Interaction, Elsevier

Licklider, J. (1960). Man-computer symbiosis, IRE Transactions on Human Factors in Electronics, HFE-1 (1), pp. 4-11.

Microsoft Corporation (1995). The Windows Interface Guidelines for Software Design: An Application Design Guide, Microsoft press.

Myers, B., Hudson, S. E., Pausch, R. (1999). Past, Present and Future of User Interface Software Tools, ACM Transactions on Computer-Human Interaction, 7(1), pp. 3-28. Retrieved from http://perez.cs.vt.edu/cs5984/assignments/p3-myers.pdf.

Newell, A., Perlis, A., Simon, H. (1967). What is computer science? Science, 157, pp. 1373-1374.

Nielsen, J. (1998). What is "Usability". Retrieved on August 1, 2003 from http://www.zdnet.com/devhead/stories/articles/0,4413,2137671,00.html.

Nielsen, J. (2000). Why You Only Need to Test With 5 Users. Retrieved on August 1, 2003 from http://www.useit.com/alertbox/20030602.html.

Nielsen, J. (1993). Usability Engineering. New York, NY: Academic Press, Inc.

Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proceedings of ACM CHI'90 Conference, Seattle, WA, USA, pp. 249-256.

Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R. L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.

Nielsen, J. (1994). Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier. Retrieved on August 1, 2003 from http://www.useit.com/papers/guerrilla_hci.html.

Nielsen, J. (2003). Usability for $200. Retrieved on August 1, 2003 from http://www.useit.com/alertbox/20030602.html.

Nielsen, J. (2001). Beyond Accessibility: Treating Users with Disabilities as People. Retrieved on August 1, 2003 from http://www.useit.com/alertbox/20011111.html.

Nielsen, J. (1997). How users read on the web. Retrieved on August 1, 2003 from http://www.useit.com/alertbox/9710a.html.

Nielsen, J., Tahir, M. (2001). Homepage Usability: 50 Websites Deconstructed, New Riders.

Norman, D. (1998). The Design of Everyday Things, MIT Press.

Norman, D. (2003). Talk at the Interaction Design Institute, Ivrea.

Paternò, F., Paganelli, L. & Santoro, C. (2001). Models, Tools and Transformations for Design and Evaluation of Interactive Applications. Proceedings of PC-HCI 2001, Patras, Greece, pp. 23-28.

Perfetti, C. (2003). Usability Testing Best Practices: An Interview with Rolf Molich. Retrieved on August 1, 2003 from http://tc.eserver.org/19750.html.

Perfetti, C., Landesman, L. (n.d.). Eight is not enough. Retrieved on August 1, 2003 from http://www.uie.com/Articles/eight_is_not_enough.htm.

Perlman, G. (1988). Software Tools for User Interface Development. In: Helander, M. et al. (1993), Handbook of Human-Computer Interaction, Elsevier.

Scholtz, J. (2001). Adaptation of Traditional Usability Testing Methods for Remote Testing, HICSS 2001. Retrieved on January 20, 2004 from http://www.itl.nist.gov/iad/IADpapers/hicss2001-final.pdf.

Spool, J., Schroeder, W. (2001). Testing Web Sites: Five Users Is Nowhere Near Enough. Proceedings of CHI '01, Seattle, WA, USA, pp. 285-286. Retrieved on August 1, 2003 from http://www.winwriters.com/download/chi01_spool.pdf.

Stallman, R. (n.d.). The free software definition. Retrieved on August 1, 2003 from http://www.gnu.org/philosophy/free-sw.html.

Ramazzini, B. (1700). De morbis artificum diatriba.

Tamler, H. M. (2001). High-Tech vs. High-Touch: Some Limits of Automation in Diagnostic Usability Testing. Retrieved on August 1, 2003 from http://www.htamler.com/papers/techtouch/.

Tullis, T. S., Fleischman, S., McNulty, M., Cianchette, C., Bergel, M. (2002). An Empirical Comparison of Lab and Remote Usability Testing of Web Sites. Proceedings of Usability Professionals Association Conference, Orlando, FL. Retrieved on August 1, 2003 from http://hci.stanford.edu/cs377/nardi-schiano/AW.Tullis.pdf.

Vanderheiden, G. (2003). Universal Design of Consumer Products: Current Industry Practice and Perceptions. Proceedings of the XIVth Triennial Congress of the International Ergonomics Association and 44th Annual Meeting of the Human Factors and Ergonomics Society, 6, pp. 19-22. Retrieved on August 1, 2003 from http://trace.wisc.edu/docs/ud_consumer_products_hfes2000/index.htm.

Virzi, R. (1992). Refining the test phase of usability evaluation: how many subjects is enough? Human Factors, Santa Monica, CA, USA, 34(4), pp. 457-468.

Waddell, C., Henry, S., Swierenga, S., Urban, M., Burks, M., Bohman, P. (2003). Constructing Accessible Websites, APress, chap. I. Retrieved on January 20, 2004 from http://www.glasshaus.com/samplechapters/1000/default.asp.

Waterson, S., Matthews, T., Landay, J.A. (2002) In the Lab and Out in the Wild: Remote Web Usability Testing for Mobile Devices. Proceedings of ACM CHI 2002 Conference on Human Factors in Computing Systems, Minneapolis, MN, USA, pp. 296-297.