• Non ci sono risultati.

Design and comparison of navigation and interaction techniques for a virtual reality experience in the cultural heritage domain

N/A
N/A
Protected

Academic year: 2021

Condividi "Design and comparison of navigation and interaction techniques for a virtual reality experience in the cultural heritage domain"

Copied!
92
0
0

Testo completo

(1)

Master’s Degree Programme

in Computer Science

“D.M. 270/2004”

Final Thesis

Design and comparison of navigation

and interaction techniques for a virtual

reality experience in the cultural

heritage domain

Supervisor

Prof. Fabio Pittarello

Graduand

Leonardo Zuffellato

Matriculation Number 842804

Academic Year

(2)

"Is this the real life?

Is this just fantasy?"

(3)

Contents

1 Introduction 9 2 Related Works 10 2.1 Virtual Reality . . . 10 2.1.1 Brief History . . . 10 2.1.2 Immersive VR . . . 12

2.1.3 Navigation & Interaction in VR . . . 12

2.2 Tools and Devices . . . 13

2.2.1 HTC Vive . . . 13

2.2.2 Leap Motion Controller . . . 13

2.2.3 SteamVR . . . 14 2.2.4 Unity . . . 14 2.2.5 Hover UI Kit . . . 15 2.3 User Experience . . . 16 2.3.1 Embodiment . . . 16 2.3.2 Presence . . . 16 2.3.3 Engagement . . . 16 2.3.4 Immersion . . . 17 2.3.5 Perceived Workload . . . 17 3 Project Planning 18 3.1 Previous Work . . . 18 3.1.1 Technical Specification . . . 18 3.1.2 Features . . . 18 3.1.3 Goals . . . 20 3.1.4 Study . . . 20 3.1.5 Results . . . 20 3.2 Design Choices . . . 22 3.2.1 Requisites . . . 22

3.2.2 Design of the Application . . . 22

3.2.3 Application Functionalities . . . 23 3.2.4 Navigation . . . 24 3.2.5 Teleportation . . . 25 3.2.6 Scenery Selection . . . 26 4 Implementation 28 4.1 Resources . . . 28 4.1.1 Hardware . . . 28 4.1.2 Software . . . 28

(4)

4.1.3 Assets . . . 29 4.2 Development . . . 29 4.3 Platform . . . 29 4.3.1 Setup . . . 30 4.3.2 Navigation . . . 31 4.3.3 Teleportation . . . 33 4.3.4 Scenery Selection . . . 35 5 Experimental Design 37 5.1 Observed Variables . . . 37 5.1.1 Independent Variables . . . 37 5.1.2 Dependent Variables . . . 37 5.2 Comparison . . . 38 5.3 Permutations . . . 39

5.4 Guiding the Users . . . 39

5.5 Location and Setup of the Study . . . 39

5.6 Description of the experiences . . . 41

5.6.1 Experience 1: Gesture Based Interaction, without Positional Tracking . . . 42

5.6.2 Experience 2: Controller Based Interaction, with Positional Tracking . . . 44

5.6.3 Experience 3: Gesture Based Interaction, with Positional Tracking . . . 46

5.7 Questionnaire Design . . . 49

5.7.1 Initial Questionnaire . . . 49

5.7.2 Questionnaires related to the Experiences . . . 49

6 Analysis 53 6.1 Results . . . 53

6.1.1 Initial Questionnaire . . . 53

6.1.2 Questionnaires related to the Experiences . . . 55

6.1.3 Final Questions . . . 61

6.2 Discussion . . . 62

7 Conclusion 65 8 Bibliography 67 9 Appendix 69 9.1 Appendix A: Tutor’s script . . . 69

9.2 Appendix B: Questionnaires . . . 71

9.3 Appendix C: Task . . . 80

9.4 Appendix D: Relevant Code . . . 82

9.4.1 PreviousMovementSystem.cs . . . 82

(5)
(6)

List of Figures

1 1939 View-Master . . . 10

2 Weinbaum’s Pygmalion Spectacles . . . 11

3 EyePhone and DataGlove . . . 11

4 Sega VR Headset . . . 11

5 HTC Vive . . . 13

6 Leap Motion Controller . . . 14

7 Unity . . . 14

8 Hover UI Kit . . . 15

9 PlayVR: moving through the theater. . . 19

10 PlayVR: selecting the sceneries. . . 19

11 PlayVR: box-plots results for engagement . . . 21

12 PlayVR: means and median scores for embodiment in the different areas of the body . . . 21

13 Recap of the three experiences . . . 23

14 The gesture of pointing with Leap Motion’s Orion . . . 24

15 Leap Motion’s "rigged-hands" . . . 25

16 The HoverCast menu on virtual hands vs controllers . . . 26

17 The HoverPanel menu . . . 26

18 The structure of the HoverCast and HoverPanel menus . . . 27

19 MSI VR One and HTC Vive . . . 28

20 Attaching the Leap Motion Controller to the HTC Vive . . . 29

21 The interface of Unity 3D (source: docs.unity3d.com) . . . 29

22 The 3D model of "La Fenice" . . . 30

23 Unity’s "Scene" panel showing the view from inside the theater, and the transform com-ponent of the selected 3D models . . . 31

24 Navigation with the HTC Vive controllers . . . 32

25 The layer mask in the stalls area ("Sala") . . . 33

26 The structure of the HoverCast GameObject . . . 34

27 Subjective view of the teleportation menu . . . 34

28 Subjective view of the teleportation menu with the HTC Vive controllers . . . 35

29 The structure of the HoverPanel GameObject . . . 35

30 Subjective view of RowAPanel . . . 36

31 Subjective view of RowMadameButterfly . . . 36

32 Subjective view of the curtain closing and then opening again, revealing the scenery . . . 36

33 The New Cambridge Institute . . . 40

34 Room layout . . . 40

35 Subjective view of the navigation system, Experience 1 . . . 42

(7)

37 Subjective view of the HoverPanel menu, Experience 1 . . . 43

38 Subjective view of the stage, Experience 1 . . . 44

39 Subjective view of first room, Experience 2 . . . 44

40 Subjective view of the navigation system, Experience 2 . . . 45

41 Subjective view of the HoverCast menu, Experience 2 . . . 45

42 Subjective view of the HoverPanel menu, Experience 2 . . . 46

43 Subjective view of the navigation system, Experience 3 . . . 47

44 Subjective view of the HoverCast menu, Experience 3 . . . 47

45 Subjective view from the stage, Experience 3 . . . 48

46 Subjective view of the HoverPanel menu, Experience 3 . . . 48

47 Example of a Tukey box plot . . . 53

48 Venn diagrams showing the logical relations among the number of users that had previous experiences with the listed devices . . . 55

49 Results for the initial questions - box plot based on a 5-points Likert scale . . . 55

50 Comparison of engagement for the three experiences (navigation) - box plot based on a 5-points Likert scale . . . 56

51 Comparison of engagement on the three experiences (teleportation) - box plot based on a 5-points Likert scale . . . 56

52 Comparison of engagement on the three experiences (scenery selection) - box plot based on a 5-points Likert scale . . . 57

53 Comparison of embodiment on the three experiences - box plot based on a 5-points Likert scale . . . 57

54 Comparison of embodiment for the three experiences on different parts of the body - mean and median scores based on a 5-points Likert scale . . . 58

55 Comparison of embodiment for the three experiences on different parts of the body - box plot based on a 5-points Likert scale . . . 59

56 Comparison of presence on the three experiences - box plot based on a 5-points Likert scale 59 57 Comparison of immersion on the three experiences - box plot based on a 5-points Likert scale . . . 60

58 Comparison of perceived workload on the three experiences - box plot based on a 5-points Likert scale . . . 61

(8)

List of Tables

1 Temporal sequence of the experiences . . . 39

2 Recap of the structure of the experiences . . . 41

3 Questions about engagement . . . 50

4 Questions about embodiment . . . 50

5 Questions about presence . . . 51

6 Questions about immersion . . . 51

7 Questions about perceived workload . . . 52

8 Total number of virtual reality devices tried by the users . . . 54

(9)

Abstract

The thesis is focused on the design and the comparison of a number of interaction techniques for VR, applied to a scenario characterized by a number of tasks that are representative of a 3D immersive experience, like the navigation and the interaction with the scene’s objects.

The interaction techniques will be applied to an experience designed for the cultural heritage domain and that will involve the exploration of a theater and the selection of sceneries.

The interaction techniques will be evaluated with a group of users in relation to different parameters, including embodiment, presence, engagement, immersion, perceived workload.

(10)

1

Introduction

The number of virtual reality applications is rapidly increasing with the improvement of the available devices. This technology has proven to be a breakthrough in many fields: entertainment, architecture, medicine, psychology, education.

This thesis focuses on the development of an immersive virtual reality experience that gives the opportu-nity of exploring an eighteenth century theater and the sceneries of a number of plays that were staged there.

The related experiment, through the evaluation of multiple users, tries to understand what is the best interaction and navigation system of such an application by analyzing the most effective parameters of user experience offered by the literature.

For the development of the project, the most recent technologies for virtual reality and gesture detection were used.

The thesis will cover the related works, then describe the design choices that were made and their implementation. The next part of this composition is about the experiment with the group of users, its results and their discussion.

Another complementary project, "Learning how to perform on stage: design and evaluation of an educa-tional virtual reality application", has been developed by Veronica Pagini and will expand this work by creating a guidance system that helps actors to learn a part by performing a play in virtual reality.

(11)

2

Related Works

This chapter will be focused on the scientific research and technology on which this thesis is based, starting from an overview about virtual reality, the related tools and devices and following with the examination of the parameters of user experience.

2.1

Virtual Reality

Virtual Reality (VR) is a term that Fuchs and Bishop1 [12] defined in 1992 as "Real-time interactive

graphics with three-dimensional models, when combined with a display technology that gives the user immersion in the model world and direct manipulation".

2.1.1 Brief History

The first person to use this term was Jaron Lanier2 in 1985, when he founded the visual programming lab (VPL), but the idea was born before that.[22]

If we think about VR as a means to create the illusion that we are present somewhere we are not, in the nineteenth century there were some painters that created panoramic paintings with the idea to fill the viewer’s entire field of vision.

When photography became more popular, in 1939 the “View-Master” stereoscope was invented by William Gruber3: a stereoscopic visor showing each eye two dimensional photos to give the user an illusion of

depth and immersion. (see Figure 1).

Figure 1: 1939 View-Master (source: retrowst.com)

Again, in 1930 Stanley Weinbaum4 devised a pair of goggles (see Figure 2) to make the user experience

a fictional world, stimulating not only the sense of vision but also smell, taste and touch.

1Gary Bishop and Henry Fuchs, American professors of Computer Science at the University of North Carolina at Chapel Hill (UNC).

2Jaron Zepel Lanier(May 3, 1960) is an American computer philosophy writer, computer scientist, visual artist, and composer of classical music.

3Wilhelm Gruber, German organ maker, inventor of the View-Master 4Stanley G. Weinbaum (Louisville, 1902-1934), American fantasy writer

(12)

Figure 2: Weinbaum’s Pygmalion Spectacles (source: gotuchvr.com)

In the 1950s, Morton Heilig5 developed the Sensorama, an arcade-style theater cabinet featuring stereo speakers and a 3D display where he fully immersed the viewer, which was shown short films.

Starting from the ’60s, the first VR Head Mounted Displays (HMD) were patented, and in 1961 the creators started to include motion tracking systems, that slowly evolved to Lanier’s gear (1987), the EyePhone HMD and the Dataglove, that was used to capture hand gestures (Figure 3) .

Figure 3: EyePhone and DataGlove (source: flashbak.com)

In the 1990s, major companies like SEGA6 and Nintendo7announced their VR consoles (Figure 4) .

Figure 4: Sega VR Headset (source: segaretro.com)

5Morton Leonard Heulig (1926 - 1997) American director, photographer and cameraman 6SEGA Corporation, Japanese videogame society founded in 1951

(13)

From that time, the market of HMDs has rapidly expanded and evolved, coming to the devices we see today.

2.1.2 Immersive VR

The authors of the Paper “Enhancing Our Life with Virtual Reality”[20] describe the term Immersive Virtual Reality as “a system that that delivers the ability to perceive through sensorimotor contingencies”. They classify a VR system “more immersive than another” if it can simulate all the perception afforded by the other, and not vice versa. A definition of immersion as a component of user experience will be given in the following sections.

2.1.3 Navigation & Interaction in VR

The navigation system of virtual reality is the way the users can move in the environment. Papers like “Travel In Immersive Virtual Environments”[11] explain the most common ways to achieve navigation in Virtual Reality.

The main existing techniques are:

• “gaze directed steering”, the users move towards the position that they are looking at;

• “pointing/gesture steering”, the users take advantage of hand gestures (that are tracked by a device) as a way of movement;

• “using devices”, the users move by pressing buttons on a device (ex: mouse, keyboard, joystick), rarely used in VR;

• “vehicles”, where the users are actually still but the environment moves around them;

• “teleportation”, where the users teleport to a different position in the virtual environment;

• “real-life walking”, the users walk with their legs: an innovative way of tracking their position in an actual room. This system will be explained in detail in the next section.

As for other types of interaction, the users generally interact with virtual object or menus in the VR environment. Some examples of those methods are similar to the means for navigation; users can interact by using:

• “gaze”, where a laser appears in the direction the users are looking at, and by looking for a number of seconds at the same object it can be selected;

• “gestures”, hand movements that are tracked by a device and mapped to different actions;

• “devices”, where the users run through the menus using external devices (ex: mouse, keyboard, joystick, controllers)”.

(14)

As mentioned, one of the question posed by this experiment is to understand a good way to achieve navigation and other types of interaction in Virtual Reality.

2.2

Tools and Devices

This section gives a brief explanation and description of the tools and devices used for this project.

2.2.1 HTC Vive

The HTC Vive [7] is a headset for virtual reality created by HTC and Valve corporation, available from April 2016. It uses tracking technology for moving in a 3D space. This paradigm, where the users are able walk in a play area with their motion tracked in the VR environment, is called “Room-scale Navigation”.

Figure 5: HTC Vive (source: vive.com)

The kit features a headset, with a Dual AMOLED display with resolution 1080 x 1200 pixels per eye and a front facing camera to observe the surroundings without removing the device. Two wireless controllers are supplied, with eight different buttons each, including a trackpad. The controllers and the headset are tracked by two Base Stations, boxes emitting infrared pulses that are picked up by the devices.

To start the experience, the user will have to set up the playable area by using the controllers to delimit the available space during the interaction. For safety, when the headset detects that the user is close to the limits of the play area, a visible reticle is superimposed to the users’ view.

2.2.2 Leap Motion Controller

The Leap Motion Controller [3] is a device created by the Leap Motion Inc. company, compatible with many headsets and desktop computers, that has the capability to track the user’s hands. It features two

(15)

monochromatic cameras and three infrared LEDs. The core software, named Orion, allows developer to use its software development kit to create VR applications that detect the hands and the gestures of the users.

Figure 6: Leap Motion Controller (source: leapmotion.com)

2.2.3 SteamVR

SteamVR [4] is a project developed by Valve, that includes a software to manage the interaction between the HTC Vive and the computer. It is part of Valve’s Steam, a platform used to distribute digital games via the Internet.

2.2.4 Unity

Unity [5] is a development platform and engine for the creation of 3D video games and interactive content. It’s compatible with Microsoft Windows and macOS, and allows the developers to use the C#

Figure 7: Unity

and JavaScript languages as well as a graphical interface to set up their creations.

(16)

supports multiple platforms and headsets and can be used for free.

Many developers choose this platform: thirty-four percent of the top free mobile games are made with Unity8.

2.2.5 Hover UI Kit

The Hover UI Kit [2] is a Unity package that interfaces with the Leap Motion and lets the developer create customizable and interactable interfaces.

Figure 8: Hover UI Kit (source: github.com/aestheticinteractive/Hover-UI-Kit )

Specifically, it allows the creation of menus and panels that can be visualized by the users’ hands or in front of their point of view. This interfaces can be interacted with a simple selection by touching it with a specific finger.

It also works in a similar way with the HTC Vive’s controllers, substituting the virtual hands with the two devices.

(17)

2.3

User Experience

According to W. Albert and T. Tullis [8] "user experience involves a user that interacts with a product, a system or an interface, and his experience is of interest, observable and measurable."

In this project, different factors connected to an Immersive Virtual Reality experience are evaluated: Embodiment, Presence, Immersion, Engagement, Flow and Perceived Workload.

2.3.1 Embodiment

Embodiment is defined by Kilteni, Groten and Slater [14] as the “Subjective experience of using and ‘having’ a body”.

More in detail, the definition is divided by the authors of this paper in three sub-components: Self-location, Agency and Body Ownership.

The complete definition becomes “one experiences Sense of Embodiment toward a body B, if one feels self-located inside B at least in a minimal intensity, if one feels to be an agent of B at least in a minimal intensity, if one feels B as one’s own body at least in a minimal intensity, if and only if one experiences at least one of the three senses at least in a minimal intensity. One experiences full Sense of Embodiment toward a body B, if one experiences all of the three senses at the maximum intensity”.

2.3.2 Presence

Witmer and Singer, the authors of “Measuring Presence in Virtual Environments: A Presence Questionnaire”[23] define presence as as the subjective experience of being in one place or environment, even when one is physically situated in another.

Some major factors are also defined:

• control factors: the more control an user has over the environment, the greater the experience of presence;

• sensory factors: how many and how much of the user’s senses are stimulated during the experience;

• distraction factors: how much the user is isolated from the real world;

• realism factors: how realistic and consistent with the real world the experience is.

2.3.3 Engagement

The main definition of engagement is based on the analytical denotation given by O’Brien [17], that underlines six different factors. These factors are:

• Cognitive Involvement or Focused Attention: the amount of concentration and mental activity perceived by the users during the experience;

(18)

• Ease of Use or Perceived Usability: the level of usability of the components of the experience;

• Aesthetics Quality: the visual beauty of computer-based environments;

• Will to Try Again: the desire of the users to try again the experience;

• Novelty of Experience: the sense of a new and surprising experience;

• Emotional involvement of Affect : the emotional investment made by the users;

2.3.4 Immersion

Witmer and Singer [23] also define immersion as “a psychological state characterized by perceiving oneself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli and experiences”.

2.3.5 Perceived Workload

The perceived workload can be evaluated using NASA-TLX (Task Load Index)[13], a test used to assess the workload of a specific task. It includes the parameters defined as follows:

• Mental Demand : "How much mental and perceptual activity was required (e.g. thinking, remem-bering, looking, searching, etc.)? Was the task easy or demanding, simple or complex, exacting or forgiving?"

• Physical Demand : "How much physical activity was required (e.g. pushing, pulling, turning, con-trolling, activating, etc.)? Was the task easy or demanding, slack or strenuous, restful or laborious?"

• Temporal Demand : "How much time pressure did the you feel due to the rate or pace at which the tasks or task elements occurred? Was the pace slow and leisurely or rapid and frantic?"

• Overall Performance: "How successful were you in accomplishing the goals of the task set by the experimenter (or yourself)? How satisfied were you with your performance in accomplishing these goals?"

• Frustration Level : "How insecure, discouraged, irritated, stressed, and annoyed versus gratified, content, relaxed, and complacent did you feel during the task?"

• Effort : "How hard did the you have to work (mentally and physically) to accomplish your level of performance?"

(19)

3

Project Planning

This chapter will explain the design of this project, starting with the previous work and continuing with the design choices.

3.1

Previous Work

This project represents an advancement and expansion of a previous application, PlayVR, created in 2016 by prof. Fabio Pittarello and Eugenio Franchin [19].

3.1.1 Technical Specification

The application was built with Unity, and included the components to make it compatible with the Oculus Rift Headset9and the Leap Motion Controller.

3.1.2 Features

Accessing the virtual world of the application, the users could explore a 3D version of the theater “La Fenice” in Venice10.

Thanks to the capabilities of the Leap Motion controller11, the users could move in the virtual space with a number of deictic gestures12:

• "pointing with index finger" resulted in the avatar moving forward;

• "pointing back with the thumb" resulted in the avatar moving backwards;

• "pointing left or right with the thumb" resulted in the avatar rotating.

The users could also teleport, with the Hover UI Kit13 Unity component, to various places in the

the-ater(see Figure 9).

In the Royal Box, there was a 3D object similar to a bookstand, and on top of it the users could swipe the pages of an open book to view the scenery and listen to Act I and Act II of some operas (see Figure 10).

9Head-mounted display developed by Oculus VR, with similar characteristics to the HTC Vive, but different functionality: the users cannot walk with their legs to move in the virtual environment and are forced to play in a sitting or standing position.

10Located in Sestiere San Marco, inaugurated in 1792, "Gran Teatro La Fenice" is the main lyrical theater in Venice 11see reference in Chapter 2

12defined in the article [18] as "pointing gestures for indicating persons, objects and directions". Another category are "manipulative gestures".

(20)

Figure 9: PlayVR: moving through the theater (source: "Experimenting with PlayVR, a virtual reality experience for the world of theater" [18])

Figure 10: PlayVR: selecting the sceneries (source: "Experimenting with PlayVR, a virtual reality expe-rience for the world of theater" [18])

The user could take part in a comedy of the famous playwriter Carlo Goldoni, playing the part of "Lunardo", a character in the play that dialogues with "Maurizio" about the combined marriage of their son and daughter.

(21)

3.1.3 Goals

The goals of this application were three:

1. Allowing the users to get acquainted with the structure of a theatre through the direct exploration;

2. Allowing the users to access a repository of sceneries and to dynamically place them on the stage for exploration;

3. Allowing the users to act a fragment of a theatrical piece.

3.1.4 Study

This virtual reality experience was used by Prof. Fabio Pittarello [18] to study different parameters of the user experience through two exploratory studies; the first one was done during European Research Night on September 29, 2016, asking different visitors to try the application and fill a paper questionnaire based on closed and open questions.

The second study was held in spring 2017 with students from the History of the Stage course for the Master Degree in Cultural Heritage, who again had the the opportunity to try the experience and fill in an expanded version of the first questionnaire.

All questionnaires included questions using the five-points Likert scale14for collecting the answers. The studied parameters were Engagement, Presence and Embodiment.

3.1.5 Results

According to the questionnaires, the user Engagement was very high (see Figure 11) and parameters like “Will to reuse it", "novelty" and "emotional involvement” got the best scores.

Positive scores were obtained also for “ease of use", "aesthetic quality" and "cognitive involvement. The scores were a bit lower because of some technical difficulties in the tracking of the hands by the Leap Motion device, and the low graphics quality of the 3D models.

Embodiment was analyzed asking the users what they perceived on the different areas of their body (see Figure 12): the results were higher for the arms and the head. This can be explained because there were no virtual counterpart for the lower parts of the body. This represents a confirmation that having a visible counterpart is very important for the sense of presence in the virtual world.

Also, manipulative gestures associated to actions like look around, teleport and change scenery had better results than deictic gestures (associated to the action of walking and turning ).

The results were also very useful for future development: the users expressed interest for manipulating real objects and using hand gestures. Other questions about the presence of connecting cables showed

14The Likert scale uses positive statements to which the surveyed express their level of agreement, varying from 1 (strongly disagree) through 5 (strongly agree).

(22)

Figure 11: PlayVR: box-plots results for engagement (source: "Experimenting with PlayVR, a virtual reality experience for the world of theater" [18])

Figure 12: PlayVR: means and median scores for embodiment in the different areas of the body (source: "Experimenting with PlayVR, a virtual reality experience for the world of theater" [18])

that the constraints were felt as an anchor that linked them to the real world, as for the physical objects in the background.

Finally, some users expressed the desire to take a break sitting on one of the theater’s virtual chair, showing that evaluating the perceived workload (as for physical fatigue) is also important.

(23)

3.2

Design Choices

3.2.1 Requisites

The goal of this work was a complete redesign of the previous experience, addressing the points of weakness outlined in the previous studies and evaluating the results in relation to an extended number of parameters that define a VR user experience.

The requisites we derived from results of the previous project [18] are the following:

1. Giving the users the possibility to have an experience without the constraint of cables;

2. Finding a navigation system in the virtual world more that is more similar to real-life;

3. Finding an effective way to map additional actions, such as the selection of locations and the dynamic visualization of sceneries, to the user interface.

Related thesis The development of the other functionality of the application, acting in a play with virtual actors, has been treated in another thesis’ project, developed by Veronica Pagini: "Learning how to perform on stage: design and evaluation of an educational virtual reality application".

3.2.2 Design of the Application

As mentioned in the Introduction, the focus of this thesis is to analyze the effect of the application of different interaction techniques to the parameters of the user experience on in a e virtual reality environment.

We designed the application basing on the requisites listed in the previous section, choosing the proper hardware components.

In response to the first requisite, we selected the state-of-the-art technology of the HTC Vive.

To comply with the second and the third requisites, taking advantage of the capabilities of this device and the Leap Motion Controller, this work compares two different navigation modalities (the first one based on deictic gestures and the other one based on the movement of the real legs) and two different interaction modalities (the first one based on the use of physical controllers, the other one on the use of hand gestures).

These different modalities have been the starting points for creating three different user experiences, which are listed below and described in Figure 13 describes the three versions.

Experience 1: Gesture based interaction, without positional tracking The first application is designed to work with HTC Vive and Leap Motion, and it’s characterized by the use of hand gestures for all the actions, including the motion.

(24)

Figure 13: Recap of the three experiences

Experience 2: Controller based interaction, with positional tracking The second application uses the lighthouse tracking system15 to allow the users to move in the virtual reality by walking with

their legs, and the controllers to explore even further. As explained in Chapter 2, this is HTC Vive’s default method of navigation.

Experience 3: Gesture based interaction, with positional tracking The third application is an innovative design: the HTC Controllers are entirely replaced by the users’ hands for most of the actions. The users walk with their own legs in a limited play area (as for experience 2)..

3.2.3 Application Functionalities

During this design phase, three core functionalities have been identified:

1. Allowing the users to move in the theater (Navigation);

2. Creating an interactable interface to let the users teleport to other explorable areas (Teleportation);

(25)

3. Creating an interactable interface to let the users select a play with a scenery they can interact with, learn from and explore (Scenery Selection).

The following sections will describe the choices we made for the implementation of these functionalities.

3.2.4 Navigation

As specified in the previous chapters, an user moves in a typical HTC Vive-compatible application by walking with his legs in the real world and using the controllers to move the playable room area to a different location in the virtual world.

Instead, in the first implementation of PlayVR [19], the users moved through the virtual world using hand gestures.

In this thesis we propose a different navigation modality where the users can move in the virtual world by walking with their own legs, but instead of using the controllers to move the playable area they use their hands.

In the resulting design, the players can use their index finger to project a laser in the point they wish to move to in the virtual world, and then with a complementary gesture (i.e. thumb extension) they are ported that location (translating the playable area around them).

Overall, the resulting gesture for this action can be defined as “pointing with the index finger and extending the thumb” (see Figure 14).

Figure 14: The gesture of pointing with Leap Motion’s Orion

This gesture was chosen for multiple reasons. First of all, the software of the Leap Motion Controller is extremely good at recognizing when the hand is open and when a particular finger is extended. The APIs have suitable functions for this.

Therefore another similar gesture, like “mimicking a gun with finger and thumb”, was discarded because it’s difficult to detect for the “occlusion problem”16(the palm of the hand covers the fingers and the Leap

(26)

Motion is unable to see them.

Besides, that gesture of pointing is natural for the users, particularly when pointing to a location. The last reason is that this gesture allows us to identify very clearly a specific position, so the precision of the result increases.

Virtual Hands The Orion software lets the developer choose from different hand models.

According to the study by Argegualet et al.[9], to enhance the user’s sense of agency17in virtual reality, hands (the only visible part of the avatar) should not be realistic.

Results from this experimental study (with three different models, from highly realistic to stylized) showed that the sense of agency is stronger for less realistic virtual hands and also provides less mismatch between the participant’s actions and the animation of the virtual hand.

For this reason the model “rigged-hands” was chosen, and asides from the representation of the hands it also includes the forearms (see Figure 15) to improve further the user embodiment.

Figure 15: Leap Motion’s "rigged-hands" (source: leapmotion.com)

3.2.5 Teleportation

In the previous experiment’s results, the teleport actions were implemented using the Hover UI Kit’s HoverCast18, which allowed to manage a multilevel radial menu for selecting the locations of the theater.

Because of the positive feedback received we took advantage of the same Kit, updated by the creator to the latest versions of the Orion software.

Additionally in this work an alternative of this interaction mechanism has been implemented using the HTC Vive’s controllers (see Figure 16).

In both cases, the users are presented with a number of options representing the areas of the theater they can teleport to. They can choose these options by selecting them with the right index finger (or the tip of right controller).

Figure 18 shows the hierarchical structure of the selections available to the users. The dashed area underlines the different radial menus.

accuracy significantly decreases. Source: http://blog.leapmotion.com/6-principles-of-interaction-design/ 17a subcomponent to embodiment, see Chapter 2 for the definition

(27)

Figure 16: The HoverCast menu on virtual hands vs controllers (source: github.com/aestheticinteractive/Hover-UI-Kit )

3.2.6 Scenery Selection

The last functionality was the selection of the sceneries. In order to make the interaction easier and to avoid the user learning different interaction paradigms, it was decided to take advantage of this hierarchi-cal structure used for the locations. However, in this case we also took advantage of another interesting interface created by the developer of this UI Kit: the HoverPanel (see Figure 17). This interface allows the users to interact with a customizable number of panels floating in front of the camera. These panels are organized in rows, switchable with on-screen buttons.

Figure 17: The HoverPanel menu (source: github.com/aestheticinteractive/Hover-UI-Kit )

The panel structure was chosen for the selection of sceneries, organizing the available plays in a single panel and their acts in additional sub-panels, accessible by selecting the label "sceneries" from the initial radial menu.

Selecting the proper panel, the users could make the scenery appear on the stage and listen to the audio track containing the related music.

Figure 18 shows the overall hierarchical structure, underlining with a dashed blue line the different panels used for managing the scenes.

(28)
(29)

4

Implementation

This chapter describes in detail how the application was developed.

4.1

Resources

This section is about the resources used for the project.

4.1.1 Hardware

The personal computer used for the development and to run the application was a MSI VR One: a gaming notebook built for virtual reality, featuring a fast NVIDIA GTX 1070 GPU, that can be worn as a backpack (see Figure 19).

The virtual reality headset we used was an HTC Vive19.

The combination of these two devices allowed the users to try an experience in a virtual environment without the constraint of cables, as the MSI computer features two batteries that offer up to 1.5 hours of non-stop running [1]. An additional battery pack was used for the testing.

Figure 19: MSI VR One and HTC Vive (source: msi.com)

Finally, a Leap Motion Controller20was employed as an alternative to the use of the HTC Vive controllers

for the detection of the hands, as it can be attached to the headset (see Figure 20).

4.1.2 Software

The MSI VR Ready computer runs Windows 10 Pro.

The project is built with Unity 3D version 5.6.0f 3, that is (as explained in Chapter 2) one of the main tools for building virtual reality applications.

19Described in detail in Chapter 2 20Also described in detail in Chapter 2

(30)

Figure 20: Attaching the Leap Motion Controller to the HTC Vive (source: blog.leapmotion.com)

The software used to manage the HTC Vive and its capabilities is Steam VR21, available from Valve’s Steam. It connects with Unity and allows developers to run their applications through the headset.

4.1.3 Assets

The application was completely created from the ground up; however, the 3D models of the theater and the sceneries created for the first implementation of PlayVR [19] were available for the development.

4.2

Development

This section explains and describes the steps taken for the development of the application.

4.3

Platform

To start building a 3D program in Unity the developer must create an empty project [6]. The interface of this program is shown in Figure 21.

Figure 21: The interface of Unity 3D (source: docs.unity3d.com)

(31)

The two most important elements of Unity are the Scenes and the GameObjects.

A scene contains the environment and the menus of the game, while a GameObject is each object contained in the game.

In Unity’s interface, the "Project Window" displays the assets (the files used in the project, as 3D models, audio files or images), whereas the "Hierarchy Window" shows a list of the GameObjects in the scene. The "Inspector Window" shows the properties of the selected objects and the "Toolbar" lets the developer play or pause the created game, visible in the "Scene View".

To create a game, the designer has to assign properties to the components of the GameObjects; each component represents the object’s characteristics.

This operation can be done by interacting with the program’s interface, using the basic built-in compo-nents, but if the developers want to extend an object’s functionality they have to create a script. A script, written either in JavaScript or C#, is useful to let the game respond to input from the player and it handles the events in the gameplay.

Every script makes a connection with the internal workings of Unity by implementing a class which derives from the built-in class called MonoBehaviour. Two functions are defined inside this class, and may be extended by the developer to handle the application:

• the code inside the Update function will be executed at every frame update for the GameObject. It’s often used for movement, triggering actions or responding to the user’s input;

• the Start function, instead, is used for any initialization. Another similar function is Awake.

4.3.1 Setup

The development of this project started with an empty scene. The first step was the setup of the environment: importing the 3D models of the theater into Unity and resizing them correctly. Figure 22 shows the 3D model of the theater.

Figure 22: The 3D model of "La Fenice"

Every GameObject has a component called Transform that is used to set the position, the size and the rotation of the models. The game scene was created by setting up the transform component for each imported element of the virtual world (see Figure 23).

(32)

Figure 23: Unity’s "Scene" panel showing the view from inside the theater, and the transform component of the selected 3D models

After configuring the environment, the next step was to set up the player’s character. This object represents the user, from its camera view to the models representing the hands and the controllers. This GameObject is provided by Leap Motion for the experiences that use hands gestures and by Steam VR for the one with controllers. The first one is called LMHeadMountedRig and controls the camera, the player and manages the hand detection. The other one is called CameraRig and has the same capabilities as LMHeadMountedRig, but it works with the the controllers.

4.3.2 Navigation

When the set up of the character and the environment were complete, the first thing that needed to be implemented was the movement system, that allows the users to explore the theater. During the design phase, we chose to develop three different systems.

Gesture based interaction, without positional tracking The first system is the one where the user moves using gestures, as in the previous work [18].

Three gestures needed to be implemented:

• "pointing with index finger" to move the avatar forward;

• "pointing back with the thumb" to move the avatar backwards;

• "pointing left or right with the thumb" to rotate the avatar.

To create this movement system, we needed to use a script that was attached to the object representing the right hand. The full code is available in Appendix D as PreviousMovementSystem.cs. This code, by tracking the gestures of the right hand, lets the avatar move around in the environment.

First it checks at every frame if the hands are present in the view. If only the right one is there, it tracks the index and the thumb direction. After this, if the palm is pointing down and only the index finger is extended it moves the avatar forward. Else, if the palm is tilted and only the thumb is extended it moves

(33)

the avatar backwards. In the last case, if the palm is pointing up or down and the thumb is extended, it rotates the avatar clockwise or counterclockwise.

Finally, because in this first implementation we didn’t want to track the motion of the user’s body (i.e. the legs), the last thing to add was stopping the HTC Vive to track the position of the user in the play area. This was done by using a variable present in UnityEngine.VR , called InputTrack-ing.disablePositionalTracking. Once it’s set to true, even if the users move by walking with their legs, their position in the virtual reality remains the same.

Controller based interaction, with positional tracking This movement system was easier to implement, because it’s the main system used by the HTC Vive. The device lets the users navigate by walking with their own legs, and uses the controllers to move the defined play area in another place of the virtual environment to explore it.

The tracking of the position of the users is automatically done by the HMD. To implement the moving of the play area, we followed a tutorial by Eric Van de Kerckhove [21]. This code makes a laser appear from the tip of the right controller when the trackpad is pressed and points to a specific location, and when the button is released it teleports the avatar there along with the play area. Figure 24 shows this system.

Figure 24: Navigation with the HTC Vive controllers

Gesture based interaction, with positional tracking The next step was to implement the previous system without the use of controllers. The idea, as specified in the design phase, was to make a laser appear from the index finger, and move the play area by "pointing with the index finger and extending the thumb". The code for this part is available in Appendix C as LaserFinger.cs. The first step in the code is checking at every frame if the hands are present. If only the right hand is in the visual, it checks if only the index finger is extended. When these conditions are verified, the function Physics.Raycast calculates the hit point of a ray generated from the tip of the finger and pointing to its direction. A visible red laser is then created by code, with the function Lerp, and when the thumb is detected as extended,

(34)

the position of the avatar is moved to the hit point.

An interesting feature of this code is that the laser appears only if it is pointed on the floors of specific areas that are free of furniture (for example the chairs). This was done by creating a layer mask : a customizable area that specifies where the player can move to (by using the laser). Figure 25 shows the outline of the layer mask in the stalls area.

Figure 25: The layer mask in the stalls area ("Sala")

4.3.3 Teleportation

After completing the three different movement systems, the next feature to implement was the telepor-tation across the different areas of the theater.

The code for this part is the same for each one of the experiences, but it’s setup differently in Unity for the use of hands or controllers.

The choice for the teleportation menu, as described in Chapter 3, was the Hover UI Kit. The HoverCast interface is attached on the palm of the left hand, and can be interacted with the index finger of the right hand. This interaction mechanism works with a structure of nested GameObjects, visible in Figure 26.

The HoverKit/Cursors contain the code for every part of the hand, while the CursorRenderers contain the data for the interactable parts (showed as a white circle). The Hovercast contains each row of the menu and every item on a row. These items, when selected, can trigger an event, that we programmed in a script called TeleportToPoints.cs (available in Appendix C).

This code is divided in different functions, each one of them called when the user selects one of the options in the teleportation menu (see Figure 27).

Each function moves the avatar in the position corresponding to an empty GameObject located in the different areas of the theater. This script interacts with another one, that sets up the layer mask for each area of the theater after a teleportation, allowing the users to explore it by walking with their legs and to use the laser to move even further.

(35)

Figure 26: The structure of the HoverCast GameObject

Figure 27: Subjective view of the teleportation menu

Another functionality that was implemented was the possibility to listen to audio tracks describing the explored areas. For this, it was used the Unity function OnTriggerEnter : when a GameObject (the trigger) collides with another object, the portion of code inside this function is executed.

In every area it was placed an empty GameObject that, when in collision with the player, triggered the correct audio track.

Lastly, this code was set up to work with controllers by substituting the LeapMotion InputModule compo-nent, that connects the interface with the device, with the Vive InputModule. Everything works exactly the same, with each controller acting as a hand (see Figure 28).

(36)

Figure 28: Subjective view of the teleportation menu with the HTC Vive controllers

.

4.3.4 Scenery Selection

The final activity that needed to be added to the application was the possibility for the users to select a scenery from a menu, make it appear on the stage and explore it. As specified in the first section of this chapter, the sceneries were available from the previous project [18], and they were realized by the students of the Fine Arts Academy of Venice.

The first step was adding these 3D models as GameObjects and positioning them on the stage.

After that, we added the "scenery menu"; during the design phase we chose to use the other interface developed by the creator of the Hover UI Kit, HoverPanels22.

This component is also available as an asset, and must be placed as a child (nested element) of the player’s camera. This way, when the panels are active, they appear in front of the the user’s eyes. Figure 29 shows the hierarchy of this element.

Figure 29: The structure of the HoverPanel GameObject

As shown in the picture, Hoverpanel is a child of the previously mentioned LMHeadMounterRig’s first child, CenterEyeAnchor, that is the player’s camera. Hoverpanel’s children are the rows representing the

(37)

sub-panels: the first one, InvisibleRow, is always active but it’s empty and transparent. When the panels are activated with an event, that is selecting the "scenery selection" option from the HoverCast menu, the second row, RowAPanel, appears with a transition (set up by code) and shows the list of available plays.

When a specific play is selected with the right index finger (or right controller in experience 2), RowAPanel disappears and the correct one fades in, showing all the available options for that opera: listen to an audio description or showing the scenery of its act on the stage.

Figures 30 and 31 show an example where the user selects "Madame Butterfly".

Figure 30: Subjective view of RowAPanel Figure 31: Subjective view of RowMadameButterfly

When "Audio Introduttivo" (introductive audio description) is selected, an event is triggered that plays the correct audio file. When an act of a play is selected instead, three events are triggered: the first one activates the scenery’s GameObject using Unity’s function SetActive, the second one plays another audio file that contains the music of the opera, and the last one shows an animation that closes and opens the curtains.

This animation was created from the visual editor, and it shifts the two decorative curtains left and right. The complete visual effect shows the curtains closing over three seconds, when they are closed the correct scenery activates, and then they open again, while playing the audio track. Figure 32 shows this animation.

(38)

5

Experimental Design

In this chapter the design of the testing experiences will be explained starting from the studied variables, followed by the description of each experience in detail and the design of the questionnaires.

5.1

Observed Variables

The book "Human-Computer Interaction An Empirical Research Perspective", by Scott MacKenzie [16], gives a definition of the experimental variables, independent and dependent.

The first kind are "circumstances or characteristics that are manipulated or systematically controlled to a change in a human response while the user is interacting; they are called independent because they cannot be influenced". Each test condition represents a different level.

Dependent variables are "any observable, measurable aspect of human behavior".

According to "Measuring The User Experience"[8], independent variables are the things you control, while dependent variables are the things you measure. Based on these definitions, we have defined the variables for this project.

5.1.1 Independent Variables

The independent variable studied is the interaction method in the three applications, in particular its two subcomponents: navigation and interaction. Each different navigation system and related interface represents a level :

1. Experience 1: Gesture based interaction, without positional tracking;

2. Experience 2: Controller based interaction, with positional tracking;

3. Experience 3: Gesture based interaction, with positional tracking.

5.1.2 Dependent Variables

The dependent variables are the observed parameters of the user experience:

1. Embodiment;

2. Presence;

3. Engagement;

4. Immersion;

(39)

The effect of the independent variables on these parameters was observed using questionnaires and direct observation. For each dependent variable we selected from the literature a specific definition, explained in detail in the second chapter of this thesis.

For the sake of clarity we resume in short the definitions considered for this work.

Embodiment It is the “subjective experience of using and ‘having’ a body"[14], with the three sub-components Self-location, Agency and Body Ownership.

Presence Presence is "the subjective experience of being in one place or environment, even when one is physically situated in another"[23]. The analyzed factors are control, sensory, distraction and realism.

Engagement This variable is defined basing on the six-point definition of O’Brien [17], with the factors being cognitive involvment, ease of use, aesthetics quality, will to try again, novelty and emotional involvement.

Immersion Immersion’s main definition is "a psychological state characterized by perceiving oneself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli and experiences"[17].

Perceived Workload For this variable it’s used the NASA Task load Index [13], with the parameters of mental demand, physical demand, temporal demand, overall performance, frustration level and effort.

5.2

Comparison

When evaluating the results, two comparisons will be made between the three different applications, to highlight different aspects:

1. Gesture based interaction, without positional tracking vs Gesture based, with positional tracking: both applications use gestures. In the first one the users cannot move by walking in the world, they move only in the virtual word using hand gestures. In the second one the users are free to move with their legs in a limited area, and gestures are used to relocate the area of the virtual environment they can explore.

2. Controllers based interaction, with positional tracking vs Gesture based interaction, with positional tracking: both applications let the user move around by walking in a limited area. In the first one the users perform the rest of interaction with controllers, in the second one they use their hands.

(40)

5.3

Permutations

As specified in Scott MacKenzie’s book [16], because the levels of the factor in the experimental design are assigned within-subjects23, where all user test all three versions, there may be an interference in the results based on the learning and fatigue effect. This means that the last version of the application that the users try may be evaluated differently because they are getting better with the commands, or because they start to feel tired.

In order to avoid this, we applied a counterbalancing practice: the conditions were evaluated in dif-ferent order for each group of students. Being that there are three levels of the variable, the possible permutations for experience 1, 2 and 3 are six, as shown in Table 1.

Permutation Experience First Permutation: 1 2 3 Second Permutation: 1 3 2 Third Permutation: 2 1 3 Fourth Permutation: 2 3 1 Fifth Permutation: 3 1 2 Sixth Permutation: 3 2 1

Table 1: Temporal sequence of the experiences

5.4

Guiding the Users

In the three experiences a tutor guides the users in their experience, suggesting what they should do. At the end of each experience, the users are assigned three tasks to understand what they learned and to make them test their abilities.

A script, containing the instructions to communicate to the users, was prepared before the experiment as a guide for the tutor, for avoiding differences in the experiences for the different users. It’s included in Appendix A.

5.5

Location and Setup of the Study

The location of the experiment was the "New Cambridge Institute" (pictured in Figure 33), in Romano d’Ezzelino (VI). The building is a Venetian Patrician Villa, Ca’ Cornaro, built in sec XV.

Two classes of third year students from the course of Scientific and Linguistic studies, were available for the testing, for a total of 18 students that tried the experiences.

The majority of the users were 16 years old. 12 of them were male, the other 6 female.

(41)

Figure 33: The New Cambridge Institute (source: "Il Giornale di Vicenza", 15/07/2016, www.ilgiornaledivicenza.it )

The room, located at the second floor of the building, was set up as in Figure 34, with the two HTC Vive lighthouses for tracking the position of the users in opposite sides forming an area of 3.5 x 3.5 meters. The students interacted inside this playable area, while the tutors were assisting and guiding them from the desks. Videos of the room during the experiences were taken with an smartphone. A tablet was used for accessing remotely the personal computer running the experiences and recording the point of view of the user in the virtual world.

(42)

5.6

Description of the experiences

Each user tried a permutation of the three experiences.

After each one of them, they took a break during which they were be asked to complete a questionnaire, to evaluate the dependent variables. The structure and design of the questionnaires is described in the following section.

Table 2 summarizes the three different experiences and their phases, showing an example using the first permutation. The description given in the following sections is also related to the first permutation.

Experience type                                             Activity                    Duration
1: Gesture based interaction, without positional tracking   Step 1: Navigation          2 minutes
                                                            Step 2: Teleportation       4 minutes
                                                            Step 3: Scenery Selection   2 minutes
                                                            Step 4: Task Execution      3 minutes
                                                            First questionnaire
2: Controller based interaction, with positional tracking   Step 1: Navigation          2 minutes
                                                            Step 2: Teleportation       4 minutes
                                                            Step 3: Scenery Selection   2 minutes
                                                            Step 4: Task Execution      3 minutes
                                                            Second questionnaire
3: Gesture based interaction, with positional tracking      Step 1: Navigation          2 minutes
                                                            Step 2: Teleportation       4 minutes
                                                            Step 3: Scenery Selection   2 minutes
                                                            Step 4: Task Execution      3 minutes
                                                            Third questionnaire


5.6.1 Experience 1: Gesture Based Interaction, without Positional Tracking

Step 1: Navigation The users wear the devices; they are standing, able to turn their heads but not to walk around.

They start in the main room of the theater, "Entrata", where they are given a first explanation of what they should do, and they hear a recorded voice welcoming them to the experience. After some initial adjustments, they learn to move around in the virtual reality (see Figure 35).

Figure 35: Subjective view of the navigation system, Experience 1

At first they look around, then they start mastering the different gestures (a sketch of how this mapping can be implemented follows the list):

1. Pointing forward with the index finger to move forward;

2. Pointing backwards with the thumb to move backwards;

3. Pointing left/right with the thumb to rotate the body left or right.
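
The following fragment is a minimal sketch, assuming a Unity scene, of how such a gesture-to-movement mapping could work; the boolean gesture flags are hypothetical placeholders that would be updated elsewhere from the Leap Motion hand data, and the rig reference, speeds and class name are illustrative, not the project's actual code.

    using UnityEngine;

    // Illustrative sketch: maps recognized hand gestures to continuous movement
    // of the camera rig. The gesture flags are assumed to be set each frame by a
    // separate gesture-recognition component reading the tracked hands.
    public class GestureNavigationSketch : MonoBehaviour
    {
        public Transform cameraRig;          // root of the VR rig to move
        public float moveSpeed = 1.5f;       // meters per second
        public float turnSpeed = 45f;        // degrees per second

        // Hypothetical inputs derived from the Leap Motion hand data.
        public bool indexPointingForward;
        public bool thumbPointingBack;
        public bool thumbPointingLeft;
        public bool thumbPointingRight;

        void Update()
        {
            // Move along the horizontal gaze direction so that "forward" follows the head.
            Vector3 forward = Camera.main.transform.forward;
            forward.y = 0f;
            forward.Normalize();

            if (indexPointingForward)
                cameraRig.position += forward * moveSpeed * Time.deltaTime;
            if (thumbPointingBack)
                cameraRig.position -= forward * moveSpeed * Time.deltaTime;
            if (thumbPointingLeft)
                cameraRig.Rotate(Vector3.up, -turnSpeed * Time.deltaTime);
            if (thumbPointingRight)
                cameraRig.Rotate(Vector3.up, turnSpeed * Time.deltaTime);
        }
    }

Keeping the translation and rotation on the rig, rather than on the camera itself, leaves the head rotation coming from the visor untouched while the gestures move the whole rig.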

After some experiments, they move to the adjacent room, "Sala", the stalls area. They are invited to reach the center of the room, where they hear a second recording explaining its history.

Step 2: Teleportation When the users are ready, they are invited to look at the palm of their left hand, where they can see the menu created with the HoverCast component of the Hover UI Kit (see Figure 36).

Using the right index finger, which is highlighted with a white circle, they can point to "teletrasporto" to view a list of all the explorable areas of the theater. The users then start teleporting to the different areas, from "Palco d'onore" through "Buca d'orchestra" and finally "Palchetti". In each area they hear a description of its features and history, while being able to move around and explore.
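
As an illustration only, and not the project's actual code, the teleportation triggered by the menu could be structured as follows; the area list, the audio handling and the way the menu selection calls TeleportTo() are assumptions.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch: moves the camera rig to a named area of the theater
    // and plays the related audio description.
    public class TeleportMenuSketch : MonoBehaviour
    {
        [System.Serializable]
        public class Area
        {
            public string name;          // e.g. "Palco d'onore"
            public Transform anchor;     // destination position and orientation
            public AudioClip description;
        }

        public Transform cameraRig;
        public AudioSource audioSource;
        public List<Area> areas;

        // Intended to be wired to the selection event of the corresponding menu item.
        public void TeleportTo(string areaName)
        {
            Area area = areas.Find(a => a.name == areaName);
            if (area == null) return;

            cameraRig.position = area.anchor.position;
            cameraRig.rotation = area.anchor.rotation;

            if (area.description != null)
            {
                audioSource.Stop();
                audioSource.clip = area.description;
                audioSource.Play();
            }
        }
    }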

Step 3: Scenery Selection After the users have finished exploring the different parts of the theater, they are ready to move to the next step.

After teleporting again to "Palco d'Onore", the Royal Box, this time they open again the left hand menu, but they select "Scenografie". A panel appears in front of their eyes (Figure 37), listing the three plays that they can interact with: "La Boheme", "Macbeth" and "Madame Butterfly".

Figure 36: Subjective view of the HoverCast menu, Experience 1

Figure 37: Subjective view of the HoverPanel menu, Experience 1

With their right index finger, they can select the first play; the next panel then opens, where they select "Audio Introduttivo", and an initial description of "La Boheme" starts playing.

When the audio track has finished, the users move on to selecting one of the available acts of the play; they then observe the stage's curtains closing and opening again, revealing the selected scenery while the music related to the selected act starts playing.
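
A simplified sketch of how this curtain-and-scenery sequence could be orchestrated in Unity is shown below; the Animator triggers, the duration and the field names are assumptions, not the actual implementation used in the project.

    using System.Collections;
    using UnityEngine;

    // Illustrative sketch: closes the curtains, swaps the active scenery,
    // reopens the curtains and starts the music of the selected act.
    public class ScenerySwapSketch : MonoBehaviour
    {
        public Animator curtains;        // assumed to expose "Close" / "Open" triggers
        public AudioSource music;
        public float curtainTime = 3f;   // assumed duration of the curtain animation

        public void ShowAct(GameObject currentScenery, GameObject newScenery, AudioClip actMusic)
        {
            StartCoroutine(SwapRoutine(currentScenery, newScenery, actMusic));
        }

        IEnumerator SwapRoutine(GameObject current, GameObject next, AudioClip actMusic)
        {
            curtains.SetTrigger("Close");
            yield return new WaitForSeconds(curtainTime);

            if (current != null) current.SetActive(false);
            next.SetActive(true);

            curtains.SetTrigger("Open");
            music.clip = actMusic;
            music.Play();
        }
    }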

They are then invited to teleport again to the stage and explore the performance space (see Figure 38).

Step 4: Task Execution When the users have spent some time exploring the stage, they are informed that they will be assigned some tasks, to test what they have learned.

For each task they need to make use of the abilities gained during the previous steps: three times in a row they have to teleport to a specific location and select a specific scenery of a play. The complete script is available in Appendix C.


Figure 38: Subjective view of the stage, Experience 1

First questionnaire After completing the tasks, the users are invited to take a break. They spend some time sitting down and, while resting, they complete the first questionnaire, which surveys their opinions about this experience. When they are finished, they start the second experience.

5.6.2 Experience 2: Controller Based Interaction, with Positional Tracking

Step 1: Navigation The users put on the devices again, but this time they also hold the HTC Vive controllers in their hands. When they enter the virtual reality, they find themselves again in the first room, "Entrata". They start looking around using the visor and, after being invited to do so, they discover that they can physically walk to move in the environment.

After reaching the limits of the virtual room, they see a light blue grid: that is the HTC Vive Chaperone system, which indicates the boundaries of the walkable area.

To explore even further, they need to relocate the play area inside the virtual theater; to do so, they are invited by the tutor to look at the controllers in their hands (see Figure 39).

Figure 39: Subjective view of first room, Experience 2

When they point the device at the floor in front of them and press and hold the trackpad button, a laser appears from the controller, indicating the point of the virtual world they can move to (see Figure 40).

Figure 40: Subjective view of the navigation system, Experience 2

When the button is released, the users and the walkable area are relocated to that point.
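
A minimal sketch of this point-and-release relocation is given below, assuming a Unity scene; the two input flags stand in for the actual SteamVR trackpad queries, whose exact calls depend on the SDK version, and the laser, floor layer and field names are illustrative.

    using UnityEngine;

    // Illustrative sketch: while the trackpad is held, a laser starting from the
    // controller shows the candidate destination on the floor; on release, the
    // play area is shifted so that the user's head ends up above that point.
    public class ControllerRelocationSketch : MonoBehaviour
    {
        public Transform controller;   // tracked controller transform
        public Transform playArea;     // root of the play area to relocate
        public LineRenderer laser;     // simple two-point line used as the laser
        public LayerMask floorMask;    // layer of the walkable floor

        // Placeholder inputs: in the real application these would be read
        // from the SteamVR input API.
        public bool trackpadHeld;
        public bool trackpadReleasedThisFrame;

        Vector3 target;
        bool hasTarget;

        void Start()
        {
            laser.positionCount = 2;
        }

        void Update()
        {
            laser.enabled = false;

            RaycastHit hit;
            if (trackpadHeld &&
                Physics.Raycast(controller.position, controller.forward, out hit, 50f, floorMask))
            {
                target = hit.point;
                hasTarget = true;

                laser.enabled = true;
                laser.SetPosition(0, controller.position);
                laser.SetPosition(1, hit.point);
            }

            if (trackpadReleasedThisFrame && hasTarget)
            {
                // Shift the play area horizontally so that the head lands on the target.
                Vector3 offset = target - Camera.main.transform.position;
                offset.y = 0f;
                playArea.position += offset;
                hasTarget = false;
            }
        }
    }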

While they learn how to use this new system, they reach and explore again the stalls area, "Sala".

Step 2: Teleportation When the users have mastered the navigation system, they are ready to learn how to teleport in this second experience.

When they look at the controllers, they can rotate the left one to see the same HoverCast menu (see Figure 41) that they saw in the previous experience. This time, to select "Teletrasporto", they have to use the right controller, whose tip is highlighted with a white circle.

Each time they move to a different location (the same ones as before), they can again practice moving around by walking while they listen to the audio tracks.

Figure 41: Subjective view of the HoverCast menu, Experience 2

Step 3: Scenery Selection After visiting all the explorable locations of the theater, the users are invited to get back to "Palco D'Onore". From here, they can use the left controller menu again, this time to select "Scenografie". A panel similar to the one seen in the previous experience appears in front of them (see Figure 42).

Touching a portion of the panel with the right controller selects a play. This time they are invited to select the second one, "Macbeth", and to listen to its description.

When the audio description has finished playing, they can select an act of the play, observe the scenery appear on the stage, and then teleport there to explore it.

Figure 42: Subjective view of the HoverPanel menu, Experience 2

Step 4: Task Execution The last step of the virtual experience is the execution of the tasks; again the users are invited to carry out three activities, which consist of teleporting to a location and selecting a play to watch. The structure is the same as in the previous tasks, but the areas they have to move to and the sceneries to select are different (see Appendix C).

Second Questionnaire After completing the tasks, the users are again assisted while they remove the devices and take a short break. During this break, a second questionnaire (see Appendix B), similar to the first one but related to this second experience, is handed to them.

5.6.3 Experience 3: Gesture Based Interaction, with Positional Tracking

Step 1: Navigation For the last time (assuming this is the last experience in the order of the permutations) the users put on the devices; as in the first experience, their hands are free.

Once again, they start their experience from the first room, "Entrata". They look around, and are reminded that in this experience they can also move by walking with their legs.

The limits of the walkable area are the same as in Experience 2, but to explore further this time they won’t use the controllers.

When the users look at their hands, they see, as in Experience 1, the virtual models tracking their movement. When they point the index finger of the right hand, a laser originates from the fingertip, highlighting the virtual location to which the walkable area can be relocated (in relation to the real space) for further exploration (see Figure 43).


Figure 43: Subjective view of the navigation system, Experience 3

Extending the right thumb while pointing executes this action, in the same way as releasing the trackpad button did in the previous experience. They can reach the stalls area, "Sala", as before, and take some time to learn this different navigation system.
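
The confirmation step can be seen as a rising-edge detection on the thumb state; the sketch below, with hypothetical gesture flags derived from the tracked hand, illustrates the idea, while the relocation itself could reuse the same logic shown for the controller version.

    using UnityEngine;
    using UnityEngine.Events;

    // Illustrative sketch: treats the extension of the right thumb, while the
    // index finger is pointing, as a one-shot "confirm" signal.
    public class ThumbConfirmSketch : MonoBehaviour
    {
        public bool indexPointing;   // true while the pointing laser is active
        public bool thumbExtended;   // true while the right thumb is extended

        public UnityEvent onConfirm; // e.g. wired to the relocation action

        bool thumbWasExtended;

        void Update()
        {
            // Fire only on the frame in which the thumb becomes extended while pointing.
            if (indexPointing && thumbExtended && !thumbWasExtended && onConfirm != null)
                onConfirm.Invoke();

            thumbWasExtended = thumbExtended;
        }
    }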

Step 2: Teleportation As before, when the users feel confident they are invited to look at their left palm, where they see a menu identical to the one in Experience 1, which they can operate in the same way with the right index finger (see Figure 44).

Figure 44: Subjective view of the HoverCast menu, Experience 3

Using this menu, and selecting "Teletrasporto", they can for the last time visit the explorable locations of the theater (see Figure 45).

Step 3: Scenery Selection When the users have finished exploring the locations, and they have returned to "Palco d’Onore", as in the other experiences they are invited to use again the left hand menu to select "Scenografie".

A panel identical to the one of Experience 1 appears (see Figure 46), where they select the last play, "Madame Butterfly".


Figure 45: Subjective view from the stage, Experience 3

Figure 46: Subjective view of the HoverPanel menu, Experience 3

After hearing the last audio description and selecting an act, the users teleport to the stage to explore the scene design.

Step 4: Task Execution When the users are finished exploring, they are asked for the last time to complete the tasks (see Appendix C).

They once again use the new interaction system to teleport to specific locations and select specific sceneries.

Third Questionnaire Once the tasks are completed, the users take off their devices and sit down for a while, completing the last questionnaire (see Appendix B).

This time, an additional page is given to them, asking them to provide a brief evaluation of the pros and cons of the three experiences as a whole and some suggestions for future development.


5.7 Questionnaire Design

The questionnaires were designed to evaluate the dependent variables across the three levels of the independent variable, i.e. the navigation and interaction system. Each question refers to an aspect of one of the user experience parameters.

The full questionnaires are available in the Appendix section.

5.7.1 Initial Questionnaire

The initial questionnaire was given to the users before trying the application, in order to assess their background.

They were asked specific questions to gather information such as age, sex and class.

The other questions were related to their previous experiences with virtual reality and theater, and were posed as multiple-choice questions and five-point Likert scale items.

According to the previously quoted book "Measuring the User Experience" [8], this is a well-suited method for gathering the users' opinions.

5.7.2 Questionnaires related to the Experiences

As described in the previous section, after each experience the students were given a questionnaire (three in total) with the same structure but referring to the specific experience. The questions contained in these questionnaires were the central ones for evaluating the dependent variables.

All the questions were expressed as positive statements and answered on the five-point Likert scale. They were divided and grouped according to each studied variable, and were devised according to the definitions given in Chapter 2 and summarized in Section 5.1.2 of this chapter.

Engagement To evaluate the Engagement, we asked the students to give separate answers for the three activities: Navigation, Teleportation and Scenery Selection. The questions were devised so that each one reflected one of the six factors composing this parameter, as defined by O'Brien and Toms [17]. Table 3 reports the questions and the evaluated factors.

Embodiment This variable and the following ones were evaluated by asking questions referring to the whole experience, not divided by single activity. Table 4 reports the questions posed, each one created with reference to a subcomponent according to the definition by Kilteni et al. [14]. The last question asks the users to give a score to each part of the body, in order to evaluate which parts they felt as more present in the virtual reality.
