

DEPT. OF ENVIRONMENTAL SCIENCES, INFORMATICS AND STATISTICS

Master’s Degree Programme

in Computer Science

“D.M. 270/2004”

Final Thesis

Learning how to perform on stage: design and evaluation of an educational virtual reality application.

Supervisor

Prof. Fabio Pittarello

Graduand

Veronica Pagini

Matriculation Number 845291


Dedicated to Leonardo and Serena, my companions in our very first Virtual Reality experience during our stay in Berlin.


Contents

1 Introduction
2 Virtual Reality
  2.1 Brief History
  2.2 Virtual Reality Devices
    2.2.1 Oculus Rift
    2.2.2 HTC Vive
  2.3 Motion Tracking Devices
    2.3.1 Leap Motion Controller
  2.4 User Experience
    2.4.1 Engagement
    2.4.2 Embodiment
    2.4.3 Presence
    2.4.4 Immersion
    2.4.5 Flow
    2.4.6 Perceived Workload
    2.4.7 Perceived Learning
  2.5 Review of Existing Works
    2.5.1 Theatre VR
    2.5.2 To Be With Hamlet
    2.5.3 National Theatre
3 Project Design
  3.1 Previous Work
    3.1.1 Experience
    3.1.2 Results
    3.1.3 Future Directions
  3.2 Questionnaire to Actors
  3.3 Results
  3.4 Requisites
  3.5 Application Design
    3.5.1 Guidance System
    3.5.2 Haptic Feedback
  3.6 Tools Used
    3.6.1 Hardware
    3.6.2 Software
  3.7 PlayVR - Complementary Project
4 Implementation
  4.1 Overview
  4.2 Guidance System
    4.2.1 Visual Interface
    4.2.2 Scene Composition
    4.2.3 Recitation Script
  4.3 Haptic Feedback
5 Experiment
  5.1 Experimental Design
  5.2 Experiment Description
    5.2.1 Room Setup
    5.2.2 Experiment Structure
  5.3 Questionnaires Design
    5.3.1 Initial Questionnaire
    5.3.2 Experiment Questionnaires
6 Analysis
  6.1 Preliminary Results
  6.2 Experiment Results
    6.2.1 Engagement
    6.2.2 Embodiment
    6.2.3 Presence
    6.2.4 Immersion
    6.2.5 Flow
    6.2.6 Perceived Workload
    6.2.7 Perceived Learning
    6.2.8 Open questions
  6.3 Discussion
    6.3.1 Engagement
    6.3.2 Embodiment
    6.3.3 Presence
    6.3.4 Immersion
    6.3.5 Flow
    6.3.6 Perceived Workload
    6.3.7 Perceived Learning
7 Conclusion
8 Appendix A: Questionnaire to Actors
9 Appendix B: Relevant Code
10 Appendix C: Initial Questionnaire


List of Figures

1 The Telesphere Mask, 1960
2 Nintendo Virtual Boy, 1995
3 Oculus Rift and Oculus Touch Controllers
4 HTC Vive Headset, Base Stations and Controllers
5 Leap Motion controller
6 Leap Motion controller mounted on the HTC Vive headset
7 Theatre VR - Teaser 1
8 Roger Casey and Zachary Koval performing the play To Be With Hamlet
9 Wonder.land installation at the National Theatre; credit: Richard Davenport
10 Mapping between real world gestures and virtual actions [25]
11 Engagement results of the first and second study (F. Pittarello, 2017 [25])
12 Resulting scores of the actions performed by the users in the virtual world (F. Pittarello, 2017 [25])
13 Embodiment results (F. Pittarello, 2017 [25])
14 Actions in the virtual world associated with specific gestures (F. Pittarello, 2017 [25])
15 Impact of freedom and constraints (F. Pittarello, 2017 [25])
16 Actors' professions
17 Actors' theatrical experiences
18 Actors' interest in theatrical genres
19 Results on the importance of certain factors when learning the part for a comedy
20 Sketch for the positional aid
21 Sketch for the temporal aids
22 Sketch for the feedbacks shown during the acting
23 Sketch for movement instructions and positional aids
24 Layout of the guidance system components during the user's turn
25 Layout of the guidance system components during the other actor's turn
26 Layout of the guidance system components during movement's instructions
27 Blocking positions [20]
28 Chair used for the experiment
29 Table used for the experiment
30 PlayVR scene's hierarchy
31 Animation panel after the creation of the animations for the curtains
32 Composition of the object representing the user in the virtual environment
33 Hovercast as seen by the user
34 Hoverpanels as seen by the user
35 The set of visual icons used for the guidance system
36 Panel with the instructions for the user
37 Time bar and golden stopwatch
38 Position's textual feedback and cue's textual feedback, along with their respective icons
39 Hovercast as seen by the user
40 Hoverpanels as seen by the user
41 Hierarchy of the GameObject representing the play
42 Unity view of the stage with the scenery set up for the experience with guidance system
43 Visual representation of the variables related to the user performance and of the five user's performance cases related to the compliance with the cues
44 Virtual model of the physical chair
45 Virtual model of the physical table
46 Room layout and markers
47 User prepared to start the experience
48 HoverUI-Kit menu (Hovercast)
49 HoverUI-Kit panels (Hoverpanels)
50 User's view during the play
51 Layout of the experiment space during the experience with haptic feedback, without guidance system
52 User exploring the room and touching the objects
53 User's view during the exploration of the real chair
54 User trying to move like the character of "Maurizio"
55 User's view during the recitation as "Maurizio"
56 Layout of the experiment space during the experience with haptic feedback, with guidance system
57 User performing in the role of "Lunardo"
58 Golden stopwatch shown because the user is speaking over-time
59 User has completed the cue and receives feedback about it during the turn of the other actor
60 User looking at the positional circle on the floor
61 User receiving instructions on the actions to be performed
62 User reading the final textual summary
63 User reading the final detailed feedback
64 User watching the recitation between the virtual actors
65 Layout of the entire room during the experience without haptic feedback, without guidance system
66 User trying to touch the virtual table
67 User performing in the role of "Maurizio"
68 User's view during the recitation in the role of "Maurizio"
69 Total number of virtual reality devices tried by the students - Bar chart
70 Total number of motion control devices tried by the students - Bar chart
71 Virtual reality devices tried by the students - Venn diagram
72 Motion control devices tried by the students - Venn diagram
73 Engagement results for the three experiences - Box-plots
74 Embodiment results for the three experiences - Box-plots
75 Body ownership results on different parts of the human body - mean/median
76 Body ownership results on different parts of the body - Box-plots
77 Presence (common questions) results for the three experiences - Box-plots
78 Presence (specific questions) results for the three experiences - Box-plots
79 Immersion results for the three experiences
80 Flow results for the three experiences - Box-plots
81 Perceived workload results for the three experiences - Box-plots
82 Perceived learning (common questions) results for the three experiences - Box-plots


List of Tables

1 "Timings cases - feedbacks - colors" mapping . . . 44

2 "Positions cases - feedbacks - colors" mapping . . . 45

3 Permutation scheme of the experiences . . . 69

4 Initial questionnaire structure . . . 78

5 Engagement questions . . . 79

6 Embodiment questions . . . 80

7 Presence common questions . . . 80

8 Presence specific questions . . . 81

9 Immersion questions . . . 81

10 Flow questions . . . 82

11 Perceived workload questions . . . 82

12 Perceived learning common questions . . . 82

13 Perceived learning specific questions . . . 83

14 Experimental observations on the correctness of the users’ paths . . . 101


Abstract

This thesis is focused on the design of a virtual reality application for helping actors during the training phase of a play.

The project starts from the results of a previous project developed at Ca' Foscari University of Venice and from additional field research, involving real actors, to evaluate the parameters of interest in an actor's training. These results were used as starting points for the design of an immersive virtual reality application, where different interaction techniques were explored for supporting the actors.

The resulting implementation, focused on a fragment of a famous play by Carlo Goldoni, has been tested with a group of high-school students, considering for the evaluation a set of variables which are meaningful in a typical VR user experience.


1 Introduction

In the past decade virtual reality technology has improved considerably, and it continues to grow rapidly.

Thanks to the availability of ever-evolving VR devices, research on virtual reality used as a means for learning is increasing remarkably.

This thesis contributes to this field of research with the development of a virtual reality application focused on supporting actors, amateur or professional, during the training phase of a play.

The PlayVR project was started in 2016 by prof. Fabio Pittarello and consisted of the virtual exploration of the Venetian theater La Fenice and the possibility to perform as a character in a scene extracted from a play written by Carlo Goldoni.

The encouraging results of the two pilot studies conducted on the previous project led to the start of this thesis, with the goal of further improving this educational VR experience.

To establish the requisites for the new version of the PlayVR application, we considered the experimental results of the previous version of the application and the data gathered from questionnaires that we distributed to actors in order to gain a better understanding of the most important factors related to the learning of a part for a play.

The HTC Vive headset allowed us to take the virtual experience to a whole new level of immersion by letting the user walk freely in a room, without the constraint of cables, whereas the Leap Motion controller provided us with state-of-the-art technology to track hand gestures in a virtual environment, along with the opportunity to use a modern visual interface to interact with the virtual world.

We developed a guidance system specifically designed to aid the users in learning the role of a character, and we implemented the possibility of haptic feedback by introducing real objects, tracked and mapped in the virtual theater.

We tested the resulting application with several students of a local high school and analyzed the results with respect to a set of variables which are considered meaningful in a typical VR user experience.

This project was complementary to the one conducted by the graduand Leonardo Zuffellato, who proposed, developed and evaluated navigation and interaction systems to


explore the theater in the thesis "Design and comparison of navigation and interaction techniques for a virtual reality experience in the cultural heritage domain".


Figure 1: The Telesphere Mask, 1960 (source: mortonheilig.com)

2 Virtual Reality

In this thesis, when speaking about 'Virtual Reality', we will refer to the definition given by Brooks et al. [4]: "By virtual environments, we mean real-time interactive graphics with three-dimensional models, when combined with a display technology that gives the user immersion in the model world and direct manipulation. Such research has proceeded under many labels: virtual reality, synthetic experience, etc".

This chapter covers the theory on which this thesis is based. First, there will be a brief overview of the history of virtual reality, followed by an explanation of the main VR devices mentioned in this thesis. Then, we will describe the definition of user experience we adopted, along with an explanation of the user experience parameters analyzed in our study. Finally, there will be a review of the related existing works analyzed in the initial phase of this project.

2.1 Brief History

According to the Virtual Reality Society [35], the early concepts of virtual reality date back to long before the term was coined and defined.

The first attempts date back to the nineteenth century, with panoramic paintings. Their purpose was to give the viewers the illusion of being in a place where they were not. In 1838 Charles Wheatstone set an important milestone by demonstrating that the brain processes the different two-dimensional images from each eye into a single three-dimensional object.

Edward Link created the first example of a commercial flight simulator in 1929, which was entirely electromechanical, while in 1939 William Gruber created the View-Master, a stereoscopic viewer that anticipated the principles of the modern Google Cardboard and similar devices.

Figure 2: Nintendo Virtual Boy, 1995 (source: everyeye.it)

In the mid 1950s Morton Heilig developed the Sensorama, a theatre cabinet that featured stereo speakers, a stereoscopic 3D display, fans, smell generators and a vibrating chair, intended to fully immerse the person in the film.

However, it was only in 1960 that the first head-mounted display was created. The device, invented by Heilig and called Telesphere Mask (Figure 1), was created for film viewing, and it featured stereoscopic 3D and wide vision with stereo sound, even though it still provided neither interactivity nor motion tracking.

From there, the development of HMDs grew quickly. In 1961 there was the first attempt at motion tracking, while in 1968 a headset was connected for the first time to a computer in place of a camera (the Sword of Damocles, by I. Sutherland and B. Sproull). In 1987 Jaron Lanier, founder of the Visual Programming Lab (VPL), coined the term Virtual Reality. The company also led major developments in the VR haptics area. It was only in the nineties, however, that the public started to have access to VR devices, with the SEGA VR headset and the Nintendo Virtual Boy (see Figure 2).

The greatest VR developments have been achieved in recent years, starting with the Oculus Rift, followed by the HTC Vive, the Sony PlayStation VR, and the more recent Samsung Gear VR and Google Daydream View.

2.2 Virtual Reality Devices

This section gives a brief explanation of the main virtual reality devices mentioned in this thesis.


Figure 3: Oculus Rift and Oculus Touch Controllers (source: overclockers.co.uk )

2.2.1 Oculus Rift

The Oculus Rift [24] (see Figure 3) is a head-mounted display for virtual reality developed by Oculus VR. The company started a Kickstarter campaign to fund the development of the headset back in 2012, but the device became available to consumers only in the first quarter of 2016.

The headset is intended for seated or standing experiences, provided that the user remains in the same position.

From August 2017, the Oculus Touch Controllers can be associated with the headset, replacing the old Xbox One controller.

Notably, the Leap Motion controller, which will be explained in section 2.3.1, can be attached to the HMD thanks to a specific accessory.

The Oculus Rift was used as the headset of the previous project's application [25], as will be explained in Chapter 3: Project Design.

Figure 4: HTC Vive Headset, Base Stations and Controllers (source: devfun-lab.com)


2.2.2 HTC Vive

The HTC Vive [12] is a headset for virtual reality, developed by HTC and Valve Corporation, available to the public since April 2016.

The headset offers a "fully immersive first-person experience" [12] thanks to its innovative room-scale tracking technology.

Valve also developed a project called SteamVR [29], which includes software that manages the interaction between the HTC Vive headset, its controllers and the computer. Using the Vive Base Stations (Figure 4) and following the Vive [13] or the SteamVR [30] guidelines, the user can set up the boundaries of the play area and then move freely in the virtual room. The movements of the headset and the controllers are tracked thanks to the infrared pulses of the base stations, resulting in a very precise mapping in the virtual world.

As with the Oculus Rift, the Leap Motion controller can be attached to the headset thanks to a specific accessory (see Figure 6). The Vive headset and its controllers were used to develop the project associated with this thesis.

2.3 Motion Tracking Devices

Motion tracking devices use accelerometers, infrared cameras and LEDs, or other sensors to track the movements of objects or body parts, in order to send them as inputs to games or applications.

2.3.1 Leap Motion Controller

The Leap Motion controller [17] (Figure 5) is a hand-tracking USB device developed by Leap Motion, Inc. The controller has been available since 2013; in February 2016 it received a major software update, upgrading from V2 to Orion, which is specifically designed for hand tracking in VR.

The device uses two monochromatic IR cameras and three infrared LEDs, and it can be mounted on virtual reality headsets, as shown in Figure 6.


Figure 5: Leap Motion controller (source: edgy-labs.com)

Figure 6: Leap Motion controller mounted on the HTC Vive headset (source: blog.leapmotion.com)

2.4 User Experience

As our definition of user experience, we adopted the one from Tullis and Albert [1]: "We believe the user experience includes three main defining characteristics:

• A user is involved;

• That user is interacting with a product, system, or really anything with an interface;

• The users' experience is of interest, and observable or measurable".

The measurability of the user experience comes from the fact that it is composed of a series of factors of interest that change according to the type of study being conducted.

In our case, we decided to evaluate the following factors:

• Engagement;

• Embodiment;

• Presence;

• Immersion;

• Flow;

• Perceived Workload;

• Perceived Learning.


2.4.1 Engagement

The definition of engagement used in this thesis comes from O'Brien and Toms [23].

In a previous study of theirs, the authors identified a series of attributes that compose engagement in software applications and, starting from those, they conducted two additional studies to understand which of those attributes were the core ones.

According to their findings, engagement is composed of the following elements:

• Perceived Usability, also known as Ease of Use, that is, how easy the user found it to interact with the experience;

• Aesthetics, defined by Jennings [15] as the "Visual beauty or the study of natural and pleasing (or aesthetic) computer-based environments";

• Focused Attention (or Cognitive Load), "The concentration of mental activity; concentrating on one stimulus only and ignoring all others" (Matlin, 1994 [19]);

• Felt Involvement (or Emotional Load), "The emotional investment a user makes in order to be immersed in an environment and sustain their involvement in the environment" (Jennings, 2000 [15]);

• Novelty, which refers to "Features of the interface that the users find unexpected, surprising, new, and unfamiliar" (Huang, 2003 [14]);

• Endurability, the user's will to try the experience again.

The quoted definitions of the factors above were selected by O'Brien and Toms [23] themselves, and they can be found on page 4 of the article mentioned above.

2.4.2 Embodiment

Kilteni, Groten and Slater [16] provide a working definition of the Sense of Embodiment, referring in particular to embodiment toward artificial bodies.

Starting from the definition of de Vignemont [5], "E is embodied if and only if some properties of E are processed in the same way as the properties of one's body", and the one given by Blanke and Metzinger [3], who affirm that embodiment includes the "subjective experience of using and 'having' a body", the authors propose the following working definition for the term 'Embodiment': "[Sense of Embodiment] toward a body B is the sense that emerges when B's properties are processed as if they were the properties of one's own biological body."

Kilteni et al. further divide embodiment into three sub-components, namely:

• The sense of self-location, defined by the authors as "a determinate volume in space where one feels to be located";

• The sense of agency, that is, the sense of having "global motor control, including the subjective experience of action, control, intention, motor selection and the conscious experience of will";

• The sense of body-ownership, which refers, according to both Gallagher [7] and Tsakiris, Prabhu & Haggard [33], to one’s self-attribution of a body.

2.4.3 Presence

For the definition of presence, we adopted the one elaborated by Witmer and Singer [37], since their questionnaires are nowadays considered very useful for measuring the sense of presence.

The given definition is: "Presence is defined as the subjective experience of being in one place or environment, even when one is physically situated in another".

Furthermore, they list and explain a series of factors which contribute to the sense of presence: control factors, sensory factors, distraction factors and realism factors.

According to Witmer and Singer, these factors can be defined as follows:

• Control factors: "the more control a person has over the task environment or in interacting with the [Virtual Environment], the greater the experience of presence", (Witmer and Singer, 1998 [37]);

• Sensory Factors: "The more completely and coherently all the senses are stimulated, the greater should be the capability for experiencing presence", (Witmer and Singer, 1998 [37]);

• Distraction Factors: how much the user is isolated from the physical world and how much the user is able to ignore external stimuli and focus only on the VE ones;


• Realism Factors: the more consistent the information conveyed by a VE is with that learned through real-world experience, the more presence should be experienced in that VE (Held & Durlach, 1992 [10]).

The authors also define sub-scales for the questionnaire items that measure these factors: involvement/control, natural, auditory, haptic, resolution and interface quality.

In addition to the factors mentioned above, we decided to analyze in more detail all of the sub-scales related to the human senses:

• The auditory sub-scale;

• The haptic sub-scale;

• The "resolution and interface quality" sub-scale (which we considered as a visual sub-scale).

These sub-scales were considered particularly relevant since, as will be explained in the next chapter, we introduced real objects in our application.

2.4.4 Immersion

In the same article [37], the authors define 'Immersion', in regard to virtual reality, as: "a psychological state characterized by perceiving oneself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli and experiences".

They further explain: "A VE that produces a greater sense of immersion will produce higher levels of presence. Factors that affect immersion include isolation from the physical environment, perception of self-inclusion in the VE, natural modes of interaction and control, and perception of self-movement ".

2.4.5 Flow

The Hungarian psychologist Mihaly Csikszentmihalyi started studying the phenomenon of flow in 1975 and is considered the one who first introduced the concept of Flow. In 2009, he published a chapter for the book Oxford Handbook of Positive Psychology [28], along with Jeanne Nakamura, in which they explain what flow is and what it consists of. They affirm that the conditions for entering the state of flow are:

(21)

• "Perceived challenges, or opportunities for action, that stretch but do not overmatch existing skills,

• Clear proximal goals and immediate feedback about the progress being made.". They also explain that, once entered this condition, the state on which the subject finds him/herself is characterized by the following elements:

• "Intense and focused concentration on the present moment; • Merging of actions and awareness;

• Loss of reflective self-consciousness;

• A sense that one can control one's actions;

• Distortion of temporal experience;

• Experience of the activity as intrinsically rewarding".

2.4.6 Perceived Workload

The workload can be defined as the subjective amount of work that one has perceived during the execution of a task.

Multi-year research resulted in the development of the NASA-Task Load Index [9]. During this research ten workload-related factors were identified and, among them, a subset was selected to represent the possible sources of workload:

• Mental demand;

• Physical demand;

• Temporal demand;

• Performance;

• Effort;

• Frustration level.

With reference to the physical and mental demand, it is worth noting that Unity's guidelines [21] advise paying attention to a phenomenon called VR Sickness, which can arise from vection (the illusion of self-motion induced by visual stimuli). If the chosen movement solution in virtual reality causes such discomfort to the user, it could end up provoking strong nausea, which should of course be avoided as much as possible.
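As a concrete reference, a common simplification of the index, known as "Raw TLX", scores the overall workload as the plain mean of the six subscale ratings. The following C# sketch is purely illustrative (it is not part of the thesis code, and the usual 0-100 rating scale and the identifiers are assumptions):

```csharp
// Illustrative sketch (not from the thesis): the unweighted "Raw TLX" variant
// of the NASA-Task Load Index, scoring the overall perceived workload as the
// mean of the six subscale ratings, each assumed to be on a 0-100 scale.
public static class RawTlx
{
    public static float Score(float mental, float physical, float temporal,
                              float performance, float effort, float frustration)
    {
        return (mental + physical + temporal + performance + effort + frustration) / 6f;
    }
}
```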

2.4.7 Perceived Learning

Finally, we considered the element of learning, described by Hamari et al. [8] as the solving of a problem. More specifically, when talking about 'learning' they actually refer to 'perceived learning', that is, how much the subjects felt that they learned from an experience.

From the results obtained in their studies, the authors found that engagement has a noteworthy positive impact on learning, along with challenge and skill. Drawing from the same flow theory mentioned above [28], Hamari et al. say that "perceived challenge and skills (the main two elements of flow) are hypothesized to predict engagement and immersion, which in turn are believed to predict perceived learning". The authors found that challenge had a direct impact on learning, but it also increased engagement, which in turn influenced the outcome positively. 'Being skilled in the game' did not have a direct effect on learning, but improved it indirectly by increasing engagement; however, they did not find a significant effect between immersion and learning.

2.5 Review of Existing Works

During the initial research phase, we explored the existing works related to this project’s field.

We found that there is a lot of confusion about the term ’virtual reality’, especially when it is applied to the world of theater.

As a matter of fact, an article by Jessica Oaks [22] underlines three ways in which, in her opinion, people refer to virtual reality regarding the world of theater. According to her, three categories can be identified:

1. Live virtual theater, in which organizations use virtual worlds to create live theatrical performances. An example is the New Media Arts' Avatar Repertory Theater [2], a company that enacts small-scale live events for virtual reality platforms like OpenSim


Figure 7: Theatre VR - Teaser 1 (source: theatrevr.net )

worlds and Second Life;

2. Live Theatre, Virtually, which is actually more similar to immersive theater, a theatrical genre in which the spectator can feel more immersed in the play, but not thanks to the use of virtual reality as we intend it in this thesis. An example is the Mr. & Mrs. Dream [31] dance show, which combines a traditional set with virtual reality effects in order to create a more impressive experience;

3. Fandom in a virtual world, in which companies bring high profile live performances to a broader audience using VR, like Theatre VR, mentioned below.

Among these categories, only the last one actually refers to virtual reality as it was defined in the first section of this chapter. However, in our opinion, this category is only a subset of the VR projects related to the world of theater. As a matter of fact, other works are not focused on the audience, but instead on functionalities useful for actors, either amateur or professional.

The following sections summarize some of the most relevant works related to virtual reality applied to the world of theater as we defined it, with regard to projects addressed both to an audience and to actors.

2.5.1 Theatre VR

Theatre VR is a virtual reality project developed by the team led by Jindřich Skeldal [32]. The game allows the user to choose a play and act in the role of a character, while the other virtual actors are driven by the computer.

The project is reported to be an experience that can be shared with friends or strangers. However, it is still at its prototype stage, and it is said that it will be available in a few months. It is not known what the target devices will be, but the teaser videos show a


Figure 8: Roger Casey and Zachary Koval performing the play To Be With Hamlet (source: engineer-ing.nyu.edu)

user wearing the HTC Vive headset (see Figure 7).

In the meantime, Theatre VR won the first Indie prize at the November 2017 Game Developer Session, bringing a demo which included the last scene of Hamlet.

2.5.2 To Be With Hamlet

A very recent article published on the NYU Tandon School of Engineering website [27] reports that a team led by Javier Molina, graduate and current adjunct assistant professor of Integrated Digital Media at the school, is developing a live theater performance in virtual reality titled To Be With Hamlet, inspired by Janet Murray's book Hamlet on the Holodeck: The Future of Narrative in Cyberspace.

The team has developed the virtual skeletons and the avatars of the two team members who play the roles of Hamlet and the Ghost. To build these models, their bodies were scanned with a Sense 3D scanner and then photographed in various lighting conditions (see Figure 8).

To track the actors' movements they used motion-capture cameras, body suits, and markers. These inputs are then said to be transmitted onto the virtual avatars and the virtual castle grounds.

The audience members can view the performance, in loco or remotely, by wearing HTC Vive headsets.

At the time the article was written, the team was working on tracking the actors' facial expressions and on solving the problem that the actors do not actually wear a headset, which makes it impossible for them to interact with an audience.


Figure 9: Wonder.land installation at the National Theatre; credit: Richard Davenport (source: tele-graph.co.uk )

2.5.3 National Theatre

An article in the Telegraph reports that the National Theatre's Immersive Storytelling Studio is working on several experimental VR projects applied to various aspects of the theater.

The first example is fabulous wonder.land [38], a virtual reality music video developed for the musical wonder.land, created by Damon Albarn, Moira Buffini and Rufus Norris, and inspired by Lewis Carroll’s iconic Alice in Wonderland.

At the National Theatre, there is a Wonder.land installation which allows the visitors to experience part of the musical in first person.

Another project from the Studio is the development of 360 degree films to help the actors get in character.

The first time these films were used, the cast of the play The Plough and the Stars was made to wear an Oculus Rift headset to experience the Easter Rising in first person, through the point of view of a young man who took part in the violent armed insurgency in Dublin in 1916. The actors who tried it were enthusiastic, reporting that it was "one of the best ways" to understand the viewpoints of people who lived through history.

Similar to the fabulous wonder.land project, we found that many other theaters are working on 360 degree films or live performances that can be viewed by the users thanks to VR headsets.


3 Project Design

3.1 Previous Work

The project developed for this thesis is the continuation of the project PlayVR, started in 2016 by Fabio Pittarello and Eugenio Franchin [26].

The application consisted of the virtual exploration of the Venetian theater La Fenice and of the recitation of a scene from the play The Rusteghi, a comedy written by Carlo Goldoni. The project was developed for the Oculus Rift headset, with the addition of a Leap Motion controller in order to capture the movements of the users' hands.

3.1.1 Experience

The experience was divided into two parts. In the first one the user could explore the theater and some scenographies in first person, thanks to specific gestures designed with the goal of enhancing the embodiment and the sense of presence.

In particular, the user could teleport to different areas of the theater using the manipulative gesture of touching a virtual menu, and then he/she could explore the area by using both manipulative and deictic gestures in order to walk around and turn the point of view.

More precisely, in the following article [25] F.Pittarello reported the gestures that were used in the application, shown in Figure 10.

Figure 10: Mapping between real world gestures and virtual actions [25]


Figure 11: Engagement results of the first and second study (F. Pittarello, 2017 [25])

Deictic gestures are the ones used to point at objects or places with the hands; manipulative gestures instead are the ones which act on an object by touching it in some way, for example by grabbing, moving, or stretching it, mimicking the gesture on the real object.

In the second part the user could play the part of one of the characters of a scene taken from the play The Rusteghi. In this experience the cues of the characters were placed right in front of the user's line of vision, presented as simple white strings of text. Moreover, the user had a predefined amount of time to complete the cue. There were two experimental studies in the work reported by F. Pittarello [25]: the first one was held with 11 volunteers who attended an event in Venice connected to the European Research Night, in the autumn of 2016, while the second one was held in the spring of 2017 with 5 students of the History of Stage course of the Master's Degree in Cultural Heritage. These pilot studies investigated the effect of the developed techniques on embodiment [16], engagement [23] and presence [16].

3.1.2 Results

According to the authors' analysis, engagement obtained high scores in both studies, particularly on the will to try again, novelty and emotional involvement, as can be seen from Figure 11. This was reported as a very encouraging result.

In particular, regarding the various activities that the users performed in the virtual world, we can clearly see from the reported results (Figure 12) that they demonstrated very high interest in watching the play and acting in it in first person; this result in particular drove the development of the project associated with this thesis.


Interestingly, the survey filled in by the participants of the second pilot study made it possible to deepen the understanding of the embodiment and of the sense of presence. For example, the authors say that legs and feet obtained very low scores due to the fact that they had no visible counterparts in the virtual world, unlike the hands and the arms; as a matter of fact, these latter body parts obtained positive results. The trunk also obtained high scores, possibly because, even though it cannot be seen in the virtual world, its presence can be perceived through the consequences of its movements. However, the head and eyes obtained the highest scores, thanks to the mapping of real-world head movements to the virtual world. See Figure 13 for the embodiment results.

Among the gestures used to perform actions in virtual reality, we can see from Figure 14 that the ones most appreciated by the users were those associated with the actions look around, teleport and change scenery. These were the actions that required manipulative gestures, whereas deictic gestures were less appreciated.

Finally, Figure 15 shows the impact that the presence of connecting cables, physical objects in the surroundings and the need to wear an HMD had on the sense of presence.

3.1.3 Future Directions

Following the results of these pilot studies, directions were laid out for the further development of the project.

The following main points were identified in [25]:

1. Improving the embodiment by using solutions capable of mapping the movements of the real legs to motion in the virtual world;

2. Experimenting with more advanced versions of the libraries, to better track hand gestures;

3. Introducing perceivable physical objects into the virtual reality experience (haptic feedback).

Figure 12: Resulting scores of the actions performed by the users in the virtual world (F. Pittarello, 2017 [25])

Figure 13: Embodiment results (F. Pittarello, 2017 [25])

Figure 14: Actions in the virtual world associated with specific gestures (F. Pittarello, 2017 [25])

At the end of these pilot studies, the intention was to evaluate the effect of these proposals on the engagement, embodiment and presence.

For further information please refer to the papers mentioned above.

3.2 Questionnaire to Actors

To have a better understanding of the most important factors related to the learning of a part for a play, a survey addressed to both professional and amateur actors was designed.


Figure 16: Actors’ professions

The questionnaire was composed of open and closed questions. In the first part we collected demographic data about the volunteers who answered the survey; then, we asked specific questions on the importance of the following factors during the learning of a part for a comedy:

• Rehearsing in the theater;

• Rehearsing with the proper scenographies;

• Having the possibility to interact or to manipulate props;

• Having hints on the cues;

• Observing the cues’ timings;

• Knowing when to enter the stage and which are the movements and positions to keep on the stage;

• Rehearsing with other actors;

• Rehearsing with stage costumes;

• Having the proper lighting during the scene;

• Rehearsing with an audience.

Finally, they were asked how much they knew about virtual reality (we also provided two brief instructional videos about VR) and how much, in their opinion, the factors mentioned above could benefit from the use of a virtual reality application.

The whole questionnaire is available at the end of this thesis (Appendix A: Questionnaire to Actors), in the language in which it was distributed to the volunteers (Italian).


Eleven actors answered the questionnaire (8 women and 3 men), with ages mainly ranging between 40 and 50. The pie chart in Figure 16 shows the professions of the eleven actors who answered our survey.

3.3 Results

The volunteers were asked about their previous theatrical experience in terms of genres, with more than one possible answer. Interestingly, all of them had experience with comedy, which proved useful for this project (see Figure 17).

The answers regarding their interest in theatrical genres (Figure 18) show a similar pattern, with comedy ranking far above the other genres.

The most important question to us, however, was the one that asked the importance of a series of factors when learning a part for a comedy. To evaluate the statements, we associated with them a five-point Likert scale4, ranging from "Not useful at all" to "Very useful". The results show that Rehearsing with other actors was deemed the most important factor by far (median score of 5.0), followed by Having the possibility to interact or to manipulate props, Observing the cues' timings and Knowing when to enter the stage and which movements and positions to keep while on the stage.

Other factors that scored lower but were still deemed important by some actors were the ones related to the possibility to rehearse in a theater, with the proper scenographies and stage costumes. The results can be observed in Figure 19.

We then asked the actors if their answer about the cues' timings would have been the same if the question had been asked for other theatrical genres ('Yes'/'No' question), and nine actors out of eleven answered affirmatively. However, when asked if they would have answered differently about the presence and reactions of an audience if the questions had regarded other genres, five of them explained in an open question that with genres such as drama, tragedy, opera, immersive performances and participative theater they would have given higher scores to the importance of the factors mentioned above.

Regarding the questions related to previous experience with virtual reality, eight of the interviewees had no knowledge about it, which explains the mild interest shown in virtual reality as an aid during the learning process of a theatrical part.

4 The Likert scale is one of the two most common metrics to capture self-reported data in a user experience study, as Albert and Tullis explain in their book Measuring the user experience [1]. It consists of a five-point scale of agreement regarding a particular statement, which corresponds to an item of the administered questionnaire.

Figure 17: Actors' theatrical experiences

Figure 18: Actors' interest in theatrical genres

Still, in the answer to another open question, when asked to elaborate on the aspects that could be interesting in a VR application intended to help people learn a part for a comedy, many of the actors said that it could prove very useful during rehearsals. In particular, they appreciated the possibility to train in a virtual theater with virtual scenographies.

All of them, however, were careful to emphasize that, in their opinion, virtual reality should not be used as a substitute for the play itself. They underlined the importance of interacting with a real audience instead of a virtual one, since plays, and in particular comedies, are based on the audience's reactions.


3.4 Requisites

Some of the requisites for the application, such as the possibility to train in a theater with the proper scenographies and, in some way, even the possibility to train with other actors, were already part of the previous project.

The results of the pilot studies and the survey distributed to actors proved very useful to select the new features to be introduced to help the users learn a part for a comedy:

1. Allow the user to move freely and with his/her own legs in the play area;

2. Give haptic feedback through props;

3. Give assistance with the cues’ timings;

4. Give assistance with the positions and the movements on stage.

3.5 Application Design

This section presents the design of the various application components that emerged from the requisites.

Starting from the requisites listed in the previous section, we selected a proper set of hardware components and designed our application on top of it.

In particular, in order to address the first requirement of the list (Allow the user to move freely in the play area), we decided to upgrade from the Oculus Rift to the HTC Vive, in order to map the movements of the user's legs to motion in the virtual world and to allow the user to move freely in the virtual room.

The following sections will focus on how we tried to give a proper answer to each of the other requirements.

3.5.1 Guidance System

The introduction of the guidance system represents the design solution to requirements 3 and 4.

The guidance system was designed following an iterative methodology, where each intermediate step was discussed with the thesis supervisor, leading to further refinements.


First Version The first sketches, shown in Figure 20 and Figure 21, are related to the initial proposal for helping the user maintain the right position on stage.

As can be seen from the first image (Figure 20), the initial ideas were:

• An 'X' mark on the floor;

• A circle on the floor;

• A square on the floor;

• A vertical semi-transparent parallelepiped within which the user should stay;

• A 'T' mark on the floor, typically used in film and television, useful to give additional information on the specific positioning of the feet;

• A 'ghost of the actor'.

In the second image (Figure 21) there are various sketches for the initial temporal aids: on Panel 1 and Panel 2 it is possible to see a slide bar under the user's cue, with an indication of the elapsed time on the right, replaced by a stopwatch when the user is speaking overtime.

Panels 3a and 3b were two alternative proposals for giving the user feedback about his/her timing performance on the just-completed cue. In Panel 3a the idea was to show a 'tick' (or a 'cross') in the upper-left part of the screen during the counterpart's turn, whereas in Panel 3b the proposal was to momentarily interrupt the recitation to show the user a big 'tick' in the center of the user's line of vision.

Panel 4 shows a first draft of the feedback to give to the user at the end of the scene.

Second Version Among the various 2D symbols proposed in the first sketches, it was decided to opt for the ’X’ mark on the floor as a positional aid, since it is probably the most used symbol in stagecraft [36] to mark the positions of actors and objects on the stage. We discarded the 3D solutions (the semi-transparent parallelepiped and the ’ghost of the actor’) because we deemed them ineffective since they would have obstructed the user’s line of vision.

Regarding the temporal aids, we kept the slide bar and the stopwatch due to their extreme clarity, but we dropped the textual indication of the elapsed time due to the extreme brevity of some cues.


We also discarded the idea of interrupting the recitation to show a 'tick' or a 'cross' on the whole screen, and decided instead to refine the proposal of showing a smaller icon during the other actor's turn.

Finally, we decided to use the idea of showing the user a final textual summary, in order to let him/her know how his/her performance was on average. However, we also added a detailed textual feedback after the summary, to let the user check exactly how he/she performed during each turn.

In this second set of sketches we also considered making the system wait for the user to complete the cues, as part of Requirement 3. As explained in the section dedicated to the initial project, the previous application had predefined timings to complete the cue, which proved difficult for the users since the cues were very fast. With the guidance system, however, the idea was to let the user complete the cue in his/her own time, and then give feedback on the performance. Because of this, five cases of the user's performance were identified, which will be explained in detail in Chapter 4, Section 4.2.3.

For now, it’s enough to say that, with respect to the expected time to complete the cue, we considered the following cases:

• The user completes the cue on time;

• The user completes the cue before the expected time;

• The user completes the cue after the expected time;

• The user fails the cue’s timing because he/she starts speaking too late and tries to accelerate the speaking;

• The user fails to complete the cue because he/she speaks too little or nothing at all.

Following these decisions, Figure 22 shows the two new proposals for showing feedback to the user during the recitation. As a matter of fact, with the introduction of the five performance cases, just showing a 'tick' or a 'cross' might not be enough for the user to understand what he/she did wrong.

The first proposal (Panel 2a) was then to replace the feedback icon with a text explaining to the user why he/she failed to complete the cue on time, if that was the case.


The second proposal (Panel 2b), instead, accompanied the feedback with suggestions on how to get better.

Figure 23 shows the introduction of additional visual aids to help improve the position and movements on stage. In particular, in Panel 3 it was proposed to move the feedback to the upper part of the screen and to use the lower space to give indications on the actions to carry out.

Final Design Choices This section describes the final design choices, derived from the iterative design process, meeting the requirements identified from the results of the previous project's pilot studies and from the questionnaire given to the actors.

Position and Movement Assistance As far as the position on the stage is concerned, the 'X' mark was initially chosen to help the user learn the correct position on stage. However, it did not help the users orient their bodies.

Upon additional research, a very interesting article on Blocking by Rosalind Flynn [6] came up. The definition of 'Blocking' given in the article is the following: "Blocking is the theater term for the actors' movements on the stage during the performance of the play or the musical. Every move that an actor makes - walking across the stage, climbing some stairs, sitting in a chair, falling to the floor, getting down on bended knee - falls under the larger term "blocking"".

The author also explains that sometimes the blocking is determined by the director and other times it is provided directly by the playwright as notes in the text of the script. Either way, the actors have to memorize the blocking positions along with their lines. The article Blocking and Movement written by Erik Sean McGiven [20], found during further research on blocking notations, not only describes the abbreviations used to refer to stage directions, but also explains the concept of Blocking Positions, defined as "the positions of the characters relative to one another and to the audience". The article includes an image (Figure 27) used for explaining the various types of blocking positions.

With respect to the other proposed symbols, the one shown in the picture had the advantage of showing the required body direction, so it was decided to adopt this icon to show the user the correct position to maintain on the stage, along with the right body direction.


Figure 24: Layout of the guidance system components during the user's turn

Figure 25: Layout of the guidance system components during the other actor's turn

However, since the positional aid could not help the user with the additional actions to be performed during the recitation, it was decided to keep the proposal of giving the user textual instructions about them; moreover, we included a textual feedback about the compliance with the right position during the user's turn, along with an appropriate icon, and a final feedback, composed of a textual summary and a final detailed textual feedback on the user's performance.

Cue's Assistance Concerning the support for learning the cues' timings, it was decided to keep the proposals of the time bar and the stopwatch, because they are widgets well known to people who are used to interacting with computer interfaces; also, they allow the user to quickly understand information about the elapsed time.

We also kept the textual feedbacks after each cue, the final textual summary and the final detailed textual feedback for the reasons explained for the positional assistance, but we


added an icon associated with the cues' feedbacks, in order to let the user distinguish the feedbacks regarding the cues from the feedbacks regarding the position.

On another note, it was also decided to add the avatar of the character who is currently speaking at the side of the text of the cue, since this is a very common widget used in interfaces to identify the character that is speaking at the moment.

Final Design The final structure of the guidance system was therefore the following:

• Cue’s assistance: time bar, golden stopwatch, cue’s textual feedback, final textual summary, final detailed textual feedback;

• Position and movement assistance: positional icon on the floor, textual instructions for movements/directions, position’s textual feedback, final textual summary, final detailed textual feedback.

Refer to Figure 24, Figure 25 and Figure 26 to see how these elements were arranged in the user's field of view.

In the following chapter (Chapter 4: Implementation) the implemented versions of these designs will be shown.

Regarding the feedback shown to the user during the recitation (see Figure 24, "cue's feedback"), we chose different phrases for the five cases, each one written in a specific color.

Table 1 summarizes the mapping between cases and feedback given to users.

We also established a mapping between the amount of time the user was in the right position and the suggestions given to the user, written in a specific color (see Figure 24, "position's feedback"). Table 2 summarizes the mapping between cases and feedback given to the user.

With regard to the position cases, the correctness of the user's position was checked by the system only during the user's turn, that is, while the user was speaking. This was chosen to be consistent with the cue's feedback, which was given after the user's turn.


| Case | Feedback (Italian) | Feedback (English) | Associated Color |
| --- | --- | --- | --- |
| The user completes the cue on time | "Ottimo!" | "Great!" | Green |
| The user completes the cue before the expected time | "Parla più lentamente!" | "Speak slower!" | Blue |
| The user completes the cue after the expected time | "Parla più velocemente!" | "Speak faster!" | Yellow |
| The user fails the cue's timing because he/she started speaking too late | "Parla più velocemente e comincia prima!" | "Speak faster and start sooner!" | Orange |
| The user fails to complete the cue because he/she spoke little or not at all, in relation to the expected length of the cue | "Hai dimenticato la battuta?" | "Have you forgotten the cue?" | Red |

Table 1: "Timing cases - feedbacks - colors" mapping

3.5.2 Haptic Feedback

The introduction of props with haptic feedback represents the design solution to the second requirement ("Give haptic feedback through props").

The initial proposals for the props to be introduced, based on the recording of a recitation of the play made for the Italian national television some years ago, were:

• The hat of one of the characters;

• A candelabrum;

• A cane;

• A table;

• A chair.

We discarded the idea of using the hat because the tracker necessary to map the object in the virtual world was a relatively big object, difficult to attach to the real hat and to carry around with it (see Figure 4 in Section 2.2.2).

The candelabrum was also left out because we could not figure out a way for it to become part of the recitation. The cane was an interesting idea, but it presented the same problems as the hat concerning the association with the trackers.


| Case | Feedback (Italian) | Feedback (English) | Associated Color |
| --- | --- | --- | --- |
| The user is in the right position less than 25% of the time | "Devi stare nel cerchio bianco!" | "You must stay within the white circle!" | Red |
| The user is in the right position between 25% and 50% of the time | "Controlla meglio la posizione!" | "Check your position more carefully!" | Orange |
| The user is in the right position between 50% and 75% of the time | "Ci siamo quasi!" | "You are almost there!" | Yellow |
| The user is in the right position more than 75% of the time | "Benissimo!" | "Very good!" | Green |

Table 2: "Position cases - feedbacks - colors" mapping

Ultimately, we decided to opt for a table and a chair, since this was the best way to preserve the original scene as much as possible, and also because they were objects actively used during the recitation. Moreover, these items had the advantage of being everyday objects, so the users would find them more familiar. The other great benefit of choosing a chair and a table was that the users could ease the physical fatigue deriving from the virtual experience, which had been mentioned by the participants of the pilot studies.

Final Design For the choice of the physical table and chair to use in our experiment, we took advantage of the pieces of antique furniture that the school where the study took place kindly offered us, shown in Figure 28 and Figure 29.


3.6 Tools Used

To develop the application associated with this thesis project, we used the hardware and software listed below.

3.6.1 Hardware

• MSI VR Ready computer, shaped as a backpack;
• HTC Vive headset;

• Leap Motion device, mounted on the headset;

• HTC Vive Controllers as trackers for the physical objects.

3.6.2 Software

• Windows 10 Pro, the operating system of the computer;
• SteamVR, to set up the Vive virtual room;

• Unity 3D, version 5.6.0f3, to develop the application. Unity [34] is a multi-platform game development tool, used to build high-quality 3D and 2D games ranging from mobile and desktop to AR/VR applications; it supplies both a graphical interface and coding tools, allowing developers to choose between C# and JavaScript as programming languages;

• Leap Motion Orion, to track the hands’ gestures;

• Leap Motion Hover-UI-Kit, to handle the menu interface. Hover-UI-Kit "is a tool for creating beautiful, customizable, dynamic user interfaces. All interface interactions utilize a simple and consistent mechanism – the "hover" – which users can perform with any 3D input device. These interfaces are designed specifically for VR/AR applications, addressing the complex UX challenges of these immersive environments".

3.7 PlayVR - Complementary Project

The results of the pilot studies of the previous project showed the necessity of working also on the navigation paradigm.

The development of this project was carried out by the graduand Leonardo Zuffellato, and it is described in detail in the thesis "Design and comparison of navigation and interaction techniques for a virtual reality experience in the cultural heritage domain".



4 Implementation

After a brief overview of some interesting aspects of our application, this section explains the implementation details of the specific elements introduced as a result of the project design:

• The guidance system;
• The haptic feedback.

4.1 Overview

As mentioned in the previous chapter (see Section 3.6.2), Unity, the software that we used to develop the application, provides a graphical interface with which the user can set up the various scenes composing the game.

A Unity scene contains the environment of the game, and it can be managed through the Hierarchy Panel. In Figure 30 it is possible to see the composition of our application’s single scene.

The elements placed in a scene are called GameObjects, Unity’s base objects, and they can represent a variety of things according to one’s needs. A GameObject is a container for Components, the elements that implement the functionalities of an object. The basic component of each GameObject is the Transform component, which represents the position and the orientation of the object. Other components can be added from the editor’s Component menu or from a script.
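As a brief illustration of this model, the following sketch (with illustrative object names) creates a GameObject, uses its Transform and adds a Component from code:

// A minimal sketch of the GameObject/Component model (names are illustrative).
using UnityEngine;

public class ComponentModelSketch : MonoBehaviour
{
    void Start()
    {
        // Every GameObject is created with a Transform component by default.
        GameObject prop = new GameObject("ExampleProp");
        prop.transform.position = new Vector3(0f, 1f, 2f);
        prop.transform.rotation = Quaternion.identity;

        // Further components can be added from script, just as they can be
        // added from the editor's Component menu.
        AudioSource voice = prop.AddComponent<AudioSource>();
        voice.playOnAwake = false;
    }
}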


Unity provides a series of pre-defined GameObjects, divided by category according to their functionality. As an example, we used 3D Objects, 2D Objects, Light, Audio and UI objects, along with Empty GameObjects.

To create new functionalities for an object, a developer needs to create a new Component. In Unity this is done by creating a script, written either in JavaScript or in C#. Specifically, we used C# as the programming language for the application.

Each Unity script connects with the internal workings of Unity by implementing a class that derives from the built-in class MonoBehaviour, which provides the developer with a series of methods to handle the events happening in the game.

During the development of the application, we used the following MonoBehaviour methods (a minimal sketch follows the list):

• Start, which is called once by Unity before gameplay, and is considered an ideal place to do any initialization;

• Awake, similar to Start, also used to initialize variables or game state. It is called before Start and, because of that, many suggest using it for referencing other scripts or objects;

• OnEnable, which is like Start, but called each time the script is activated, which can happen, for example, when the developer enables a GameObject from script;

• Update, called after Awake and Start, which handles the frame update for the GameObject to which the script is attached. Update is useful to catch events relative to the object;

• OnTriggerEnter, OnTriggerStay and OnTriggerExit, methods similar to Update, but with the specific purpose of detecting collisions between the script’s object and other GameObjects. The first method is called once for each incoming collision, the second is called on each frame in which the other object is colliding with the script’s one, and the third detects the end of a collision. In order for these to work, the GameObject to which the script component is applied also needs a Collider component attached. The parameters of a Collider can be set up according to the developer’s needs, but the parameter "Is Trigger" needs to be set to ’True’ to let the above-mentioned functions detect the collisions.
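The following sketch illustrates these methods in a single MonoBehaviour; the cached references and logged messages are purely illustrative:

// A minimal sketch of the methods listed above. The OnTrigger* callbacks
// require a Collider with "Is Trigger" enabled on the same GameObject.
using UnityEngine;

[RequireComponent(typeof(Collider))]
public class LifecycleSketch : MonoBehaviour
{
    private Collider areaCollider;

    void Awake()
    {
        // Called before Start: a good place to cache references.
        areaCollider = GetComponent<Collider>();
        areaCollider.isTrigger = true;
    }

    void Start()
    {
        // Called once before gameplay begins: initialization goes here.
        Debug.Log("Scene ready");
    }

    void OnEnable()
    {
        // Called each time the script (or its GameObject) is activated.
        Debug.Log(name + " enabled");
    }

    void Update()
    {
        // Called on every frame: per-frame event handling goes here.
    }

    void OnTriggerEnter(Collider other)
    {
        // Called once when another collider starts overlapping this trigger.
        Debug.Log(other.name + " entered the area");
    }

    void OnTriggerStay(Collider other)
    {
        // Called on each frame in which the overlap persists.
    }

    void OnTriggerExit(Collider other)
    {
        // Called once when the overlap ends.
        Debug.Log(other.name + " left the area");
    }
}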

Another interface element that we used during development was the Animation panel, which allows the developer to create and modify Animation Clips directly from inside Unity (see Figure 31).


Figure 31: Animation panel after the creation of the animations for the curtains

We took advantage of this tool to create a simple ’open-close’ animation for the theater’s curtains.
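A clip created this way can be triggered from code through an Animator component; the following is a minimal sketch, where the state names are assumptions rather than the project’s actual ones:

// A minimal sketch (assumed state names): playing the 'open' and 'close'
// curtain clips through an Animator attached to the curtains GameObject.
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class CurtainsSketch : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    public void Open()
    {
        animator.Play("CurtainsOpen");   // assumed animation state name
    }

    public void Close()
    {
        animator.Play("CurtainsClose");  // assumed animation state name
    }
}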

It is worth noticing that, in order to make a virtual reality application for the HTC Vive, the first thing we had to do was to add a specific SteamVR asset to the scene. Then we set up the GameObject representing the player, which we called "TheatrePlayer" (see Figure 32); this object in turn contains "LMHeadMountedRig", which is composed, among other things, of several specific VR components. Among them, for example, there is a Leap Motion script that allows the application to take as input the user’s hands detected by the Leap Motion controller.

Going further down the hierarchy, "LMHeadMountedRig" contains the "CenterEyeAnchor" object, to which the SteamVR camera is attached. This component is what effectively renders what the user sees and hears during the experience.

"LMHeadMountedRig" also contains the virtual hands models to which the user’s hands’s movements are mapped, and the two HTC Vive controllers, used in this instance as track-ers for the physical objects.

We can see from Figure 32 that "CenterEyeAnchor" contains all the visual elements that we used for the guidance system ("CanvasHaptic" and "CanvasNoHaptic") and for the interface ("Hoverpanel").

It is worth mentioning how the user interface that we built with the Hover-UI-Kit is composed, and what its functionalities are. Its main elements are the Hovercast menu and the Hoverpanels.

The Hovercast menu, as shown in Figure 33, appears on the user’s left hand, and the user can interact with it by using the index finger of the right hand.


Figure 32: Composition of the object representing the user in the virtual environment

The five elements of the menu for the PlayVR experience are the following (the labels shown in the pictures are in Italian, since the application was tested with Italian high-school students):

• Teleport: selecting this opens another menu, showing the user a series of theater areas to which he/she can teleport;

• Experience 1: leading to Hoverpanels specific for Experience 1 (the experiences will be explained in detail in Chapter 5, Section 5.2);

• Experience 2: leading to Hoverpanels specific for Experience 2;
• Experience 3: leading to Hoverpanels specific for Experience 3;

• Switch, a solution to invert the virtual models of the physical chair and the physical table. A detailed explanation of this functionality can be found in the section dedicated to the haptic feedback (Section 4.3); a sketch of the kind of handlers behind these menu items is given after this list.
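The following minimal sketch shows two hypothetical handlers; we do not reproduce the Hover-UI-Kit selection-event API here, and the actual Switch behaviour is the one described in Section 4.3:

// A minimal sketch (hypothetical names): plain callbacks that menu items
// could be wired to. The Switch handler naively swaps the two models' poses.
using UnityEngine;

public class HovercastHandlersSketch : MonoBehaviour
{
    public Transform player;         // the "TheatrePlayer" root object
    public Transform stageArea;      // one of the theater areas offered by Teleport
    public GameObject virtualTable;  // virtual models of the two tracked props
    public GameObject virtualChair;

    // Invoked when a destination is selected in the Teleport sub-menu.
    public void TeleportToStage()
    {
        player.position = stageArea.position;
    }

    // Invoked by the Switch item (illustrative; see Section 4.3 for the
    // actual implementation).
    public void SwitchProps()
    {
        Vector3 tmpPos = virtualTable.transform.position;
        Quaternion tmpRot = virtualTable.transform.rotation;
        virtualTable.transform.position = virtualChair.transform.position;
        virtualTable.transform.rotation = virtualChair.transform.rotation;
        virtualChair.transform.position = tmpPos;
        virtualChair.transform.rotation = tmpRot;
    }
}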

The Hoverpanels are panels that the user views along his/her line of sight (Figure 34). There are three panels for each Hovercast button related to an experience.

The three panels shown to the user are labeled, in order: Attend to the recitation, Set up the scenery for [Character Name]!, Perform on stage [Character Name]!.

Other Hoverpanels were used to show the user the final textual summary and final detailed textual feedback. Since these elements are part of the guidance system, they will be discussed in the next section.

4.2 Guidance System

In the following sections we will describe first the visual elements of the guidance system, then the composition of the recitation environment and, finally, the script that handles the recitation experience.



Figure 33: Hovercast as seen by the user
Figure 34: Hoverpanels as seen by the user


4.2.1 Visual Interface

As described previously, the main elements of the guidance system resulting from the project design are the following:

• Cue’s assistance: time bar, golden stopwatch, cue’s textual feedback, final textual summary, final detailed textual feedback;

• Position and movement assistance: positional icon on the floor, textual instructions for movements/directions, position’s textual feedback, final textual summary, final detailed textual feedback.

Except for the final textual summary and the final detailed textual feedback, all the other elements were placed inside a Canvas GameObject contained in the "CenterEyeAnchor" object.

Actually, in the scene’s hierarchy there are two canvases, one for the experience with the guidance system and one for the experiences without it. However, since the elements of this second canvas (the autocue and the actors’ avatars) are not strictly considered part of the guidance system, and since they are contained in the guidance system’s canvas anyway, we will describe only the implementation of the guidance system’s canvas.


Figure 36: Panel with the instructions for the user
Figure 37: Time bar and golden stopwatch


It is worth noticing that the guidance system was used only in the experience in which the user acts in the role of one of the characters, "Lunardo". Therefore, when speaking about ’the turn of Lunardo’ we will be referring to the turn in which the user is acting (and vice versa for ’user’s turn’), and when speaking about the ’turn of Maurizio’ we will be referring to the part in which the user attends the virtual actor’s performance and receives the feedback about his/her performance in the previous turn.

The guidance system’s Canvas GameObject had the following children, which acted as containers for specific elements:

• "CueAndBackgrounds", which contains the Text GameObject for the cues’, the in-structions and other text, the Image GameObject for the background of the cues, and the Image GameObject that works as the background for the feedback shown to the user during Maurizio’s turn and as the background of the time bar during the


Figure 39: Hovercast as seen by the user
Figure 40: Hoverpanels as seen by the user


• "TurnoMaurizio", which contains the Image GameObject representing Maurizio’s avatar (Figure 35 E), and the Text Gameobjects for the textual feedback shown to the user during the recitation;

• "TurnoLunardo", which contains the Image GameObject representing Lunardo’s avatar (Figure 35 F), the two Image GameObjects composing the time bar and the Image GameObject of the golden stopwatch (Figure 35 B);

• "IconsFeedback", which contains the two Image GameObjects showing the icons as-sociated to the textual feedback given to the user during the recitation (Figure 35 C and D).

The "CenterEyeAnchor" object contains also the object "Hoverpanel" which, in addition to the panels that compose the application’s interface, contains the single final textual summary panel and the several final detailed textual feedback panels.

All of these elements were then referenced and handled from the script responsible for the management of the whole recitation experience, explained in Section 4.2.3. The result of the implementation of the guidance system can be appreciated in Figures 36, 37, 38, 39 and 40.
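As an illustration of this referencing, the following minimal sketch (with hypothetical field names) shows how the recitation script could keep references to the Canvas containers described above and toggle them between the two turns:

// A minimal sketch (hypothetical names): toggling the guidance system's
// Canvas containers between the user's turn and Maurizio's turn.
using UnityEngine;
using UnityEngine.UI;

public class GuidanceCanvasSketch : MonoBehaviour
{
    public GameObject turnoMaurizio; // avatar and textual feedback for Maurizio's turn
    public GameObject turnoLunardo;  // avatar, time bar and stopwatch for the user's turn
    public Text feedbackText;        // the textual feedback shown during Maurizio's turn

    public void BeginUserTurn()
    {
        turnoMaurizio.SetActive(false);
        turnoLunardo.SetActive(true);
    }

    public void BeginMaurizioTurn(string feedback, Color color)
    {
        turnoLunardo.SetActive(false);
        turnoMaurizio.SetActive(true);
        feedbackText.text = feedback;  // e.g. "Ottimo!" in green (see Table 1)
        feedbackText.color = color;
    }
}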

4.2.2 Scene Composition

The acting experiences took place on the stage of the theater. In particular, upon the user’s selection of the Set up the scenery for Lunardo! Hoverpanel, the GameObject related to the experience with the guidance system and the real objects ("04_Rusteghi05_Haptic")
