
Design and development of a Unity Editor XR plugin to enhance the immersive authoring of an interactive virtual environment


Academic year: 2021



Dipartimento di Informatica
Corso di Laurea in Informatica

Design and development of a Unity Editor XR plugin to enhance the immersive authoring of an interactive virtual environment

Supervisors:

Prof. Marcello Carrozzino

Prof. Franco Tecchia

Candidate:

Andrea Mantovani


Contents

1 Introduction
  1.1 Thesis Outline
  1.2 Virtual Reality
  1.3 Visualization Systems
    1.3.1 The CAVE system
    1.3.2 Head Mounted Displays
  1.4 Applications of Virtual Reality
    1.4.1 Military
    1.4.2 Healthcare
    1.4.3 Virtual Heritage

2 Background And State of the Art
  2.1 Virtual Reality Frameworks
    2.1.1 Academic VR Frameworks
    2.1.2 Commercial VR Frameworks
  2.2 Game Engine
    2.2.1 The XVR Framework
    2.2.2 Unreal Engine
    2.2.3 Unity Engine
  2.3 The Available VR Editors

3 Architecture and Implementation
  3.1 Software Design Overview
    3.1.1 Modules and Interface Binder
    3.1.2 Tools
    3.1.3 Menus
    3.1.4 Applications
    3.1.5 Navigator
  3.2 Implementation
    3.2.1 Brief Recap about the Unity Engine Elements
    3.2.2 Entry Point
    3.2.3 Dependency Injection
    3.2.4 Input Management
    3.2.5 Application Management

4 Conclusion
  4.1 Future Improvements
    4.1.1 A Concrete DSL for the Modules Binding
    4.1.2 Visual Scripting in Scene


List of Figures

1.1 CAVE systems.

1.2 Representation of a VR headset functioning.

1.3 (Left) The Oculus Rift and its Motion Controllers. (Right) The HTC Vive and its Motion Controllers.

1.4 Microsoft HoloLens headset.

1.5 Different kind of realities.

1.6 VR for parachute simulation.

1.7 VR for telesurgery.

1.8 VR for cultural heritage.

2.1 Some of the most used game engines nowadays (https://medium.com/@muhsinking/should-you-make-your-own-game-engine-f6b7e3f4b6f5).

3.1 Project Layered Architecture.

3.2 Object dependencies resolution phases.

3.3 The MVC (Model-View-Controller) pattern representation applied to the tools' design.

3.4 Application hierarchy representation with message sending. The filled arrows, starting from the Root, show the downward selection of the widgets inside the hierarchy, while the dotted arrow shows the upward selection from a child widget to a parent one.

3.5 The context in which an application is immersed, allowing it to configure itself through the provided settings.

3.6 The complete input management flow used to send the application the commands to interact with the relative tool. Each phase is enumerated in ascending order as follows: 1) the Navigator asks the Interface Binder to provide the devices' input; 2) the Interface Binder gives the Navigator a reference to the suitable module that receives the raw input values; 3) the input module obtains a connection to the Navigator component that translates raw values into "messages"; 4) the module catches the user's inputs and forwards them to the Navigator; 5) the input manager invokes the Messaging System facilities to wrap the raw input values in a more semantic "message" (in this step the input values can be dropped if they are not relevant to the "message" meaning); 6) the Messaging System sends the agreed "message" to the application; 7) the application may emit events relative to the incoming "messages", which are bound to the tool's functionalities.

3.7 The diagram summarises the ordering and repetition of the callbacks during the lifetime of a MonoBehaviour.

3.8 Comparison between a tool and a workspace.


Chapter 1

Introduction

During the last years of the '90s and the beginning of the new millennium, one particular theme constantly ruled the sci-fi movie scene: the deep symbiosis between humans and machines. It is undeniable that technology is nowadays pervasive in roughly all human activities; maybe not quite as in the movies, but in a more discreet (and in some cases less catastrophic) way computers are always present, in different shapes, but they are there. Looking more closely at the most relevant movies of that genre, two technological aspects take up most of the space: artificial intelligence (AI) and virtual reality (VR). Of these two very topical subjects, the ones that have been involved in the greatest number of sectors and have reached both a consumer and an industrial audience are certainly virtual reality and augmented reality. The way these new forms of interaction with machines are available today is a little different from what was expected in the '90s (not too far off anyway) but, as will be seen later, they are mature enough to be used in real contexts.

Nevertheless, what is still left behind is the capacity to build complex virtual environments inside virtual reality itself (it seems contradictory but it is not), for several issues tied both to hardware and software [24]. Most of the products currently on the market are really limited in what the user can do, and therefore the need to find new ways to exploit virtual reality keeps growing.

From there comes the idea of making an authoring system integrated into VR, so as to allow skilled people, such as designers, developers and artists, not to be offered yet another guided experience but to be given a blank canvas on which to create new things.


Building a VR editor completely from scratch would have led to a useless amount of effort, so finding a base platform on which to start the project was one of the first tasks to accomplish. All the solutions found and the considerations made are reported in The Available VR Editors (2.3). It is better to specify right now that Unity Editor XR has been considered the best match for the project constraints. Of course, this decision has influenced the plugin design and implementation, while still giving the desired results. The final outcome is a plugin, called Iogurt, that once installed on Unity Editor XR gives access to a new set of features while still giving access to the overall facilities of the underlying platform. Lastly, great importance was given to the user interface design: frankly speaking, the success of a tool depends in large part on how well its interface is designed [31], and the benefits can be measured both in terms of time and of financial cost. In view of this, a good part of the project study was spent learning the actual state of the art regarding user interfaces (UI) and user experience (UX) and, in particular, given the new issues coming from the technical limitations of virtual reality headset hardware, the best practices and expedients in designing interfaces for immersive environments.

1.1 Thesis Outline

This document has been written to give a complete understanding of how all the functionalities have been realized, the way they work and the main reasons behind the adopted choices. Every chapter is responsible for a specific area, and their order is designed so that the reader always has all the knowledge necessary to deal with the chapter being read. Here is a preview of them:

1. Introduction: The current chapter introduces the main objectives of the thesis and gives the very basics of what Virtual Reality is.

2. Background And State of the Art: This chapter presents all the necessary knowledge about the engine that will be used, in order to have a full understanding of the following chapters. Furthermore, it introduces the state of the art of the faced problems, i.e. a precise description of those problems and a short excursus on how they have been handled by researchers and other programmers.


3. Overall System Design: Here there is a first, high-level description of the adopted solutions and of the way they have been realized.

4. Project Implementation: All the low-level details are discussed here. This is the chapter that introduces the programming issues and then presents the way those problems have been solved, showing the snippets of code that deal with them.

5. Conclusions: This is the last chapter, where all the final considerations about the previously described work are made and the future improvements are presented more in depth. Even though none of these improvements has been integrated into the system yet, all of them have already been studied and a draft of the way they could be implemented is presented.

1.2 Virtual Reality

Virtual Reality [33] (VR) has seen an incredible growth in recent years, bringing a whole new way of interacting with and experiencing 3D worlds, or even the real world itself enhanced with virtual elements in the case of Augmented and Mixed Reality. With the advent of technologies such as the Oculus Rift [8] and the HTC Vive [2], which can bring realistic VR experiences at a reasonable price, and the slightly different approach brought by Microsoft HoloLens [5] and Google Daydream [?], the interest in VR is constantly increasing.

Virtual Reality is an immersive medium which gives the feeling of being entirely transported into a virtual three-dimensional world, and can provide a far more visceral experience than screen-based media. From the user's point of view, the main properties of a Virtual Reality experience are presence, immersion and interaction:

• Presence: it is the mental feeling of being in a virtual space. It is the first level of magic for great VR experiences: the unmistakable feeling that you have been teleported somewhere new. Comfortable, sustained presence requires a combination of the proper VR hardware, the right content and an appropriate system. The feeling of presence is strictly related to user involvement.


• Immersion: it is the physical feeling of being in a virtual space, at a sensory level, by means of interfaces. It is related to the perception of the virtual world as actually existing. The perception is created by surrounding the user with images, sounds and other stimuli that provide an engrossing total environment.

• Interaction: it is the user's capacity to modify the environment and to receive feedback from it in response to his actions. It is related to the realism of the simulation. The interaction can be direct or mediated: in the first case the user interacts directly with the VE (CAVE-like systems), in the second case the user interacts with the VE by means of an avatar, either in first person (HMD systems) or in third person (monitor).

1.3 Visualization Systems

An immersive VR system provides real-time viewer-centered head-tracked perspective with a large angle of view, interactive control, and binocular stereo display. One consideration is the selection of a technological platform to use for the presentation of virtual environments. This section provides an overview of the two most common visualization systems used in immersive virtual reality applications: CAVE systems and systems based on Head Mounted Display. As stated in [50], HMDs offer many advantages in terms of cost and portability with respect to CAVE systems. On the other hand, CAVEs provide broader field-of-view, higher resolution and a more natural experience for the user.

1.3.1 The CAVE system

A Cave Automatic Virtual Environment (CAVE) [52] is a cubical, room-sized fully immersive visualization system. The acronym is also a reference to the allegory of the Cave in Plato’s Republic in which a philosopher contemplates perception, reality and illusion. The walls of the room are made up of rear-projection screens on which high-resolution projectors display images. The user goes inside of the system wearing shutter/polarizer glasses to allow for stereoscopic viewing.


Figure 1.1: CAVE systems.

One of the main advantages of CAVE systems is the possibility of being used by multiple users at the same time. Apart from the glasses, users inside a CAVE do not need to wear heavy headgear (unlike in HMD systems).

The SSSA X-CAVE [52] is an immersive visualization system developed by the PERCRO Laboratory of Scuola Superiore Sant'Anna [11]. It is a 4-metre, 4-wall room in which each wall is divided into several tiles in order to ensure a good resolution. The front, right and left screens are subdivided into four back-projected tiles each, for a global resolution of about 2500x1500 pixels per screen, whilst the floor is subdivided into six front-projected tiles for a global resolution of about 2500x2250 pixels. The X-CAVE is managed by means of the XVR technology, exploiting its distributed rendering features on a cluster of five workstations. The system is also provided with 7 cameras for optical position and orientation tracking.

1.3.2 Head Mounted Displays

A Head Mounted Display (HMD) is a device worn by the user on his head which reflects the way human vision works by exploiting binocular and stereoscopic vision. By using one or two screens with high resolution and high Dots Per Inch (DPI) and showing a different point of view per eye, the headset gives the illusion of depth to the user. Furthermore, the headset's gyroscope and accelerometer keep track of the head movement, moving and tilting the virtual camera of the world accordingly to achieve a natural, realistic feeling of the head movement.

Figure 1.2: Representation of a VR headset functioning.

Since the Oculus Rift presentation, many companies have developed their own VR headsets with more or less success, each one proposing innovative solutions to the other big issue for full immersion: the interaction. As the user is directly transported into a virtual world, for full immersion he should be able to interact with it as if it were the real world. While this still represents a big challenge in the field, some companies such as HTC and Oculus have proposed effective solutions by tracking the user's movement in a small room or delimited space.

The Motion Controllers are meant to substitute the hands of the user, allowing interaction with the virtual world. Obviously, this still represents a big limitation: there is still no touch feedback, and the user's hand and finger movements are still a challenge to reproduce in the virtual world. Other solutions, such as the ones developed at PERCRO, aim to provide a more realistic feeling of the virtual environment by using haptic devices and body tracking to bring the user's body into the virtual world.

While VR aims to bring the user in a virtual world, Augmented Reality (AR), aims to bring virtual elements directly into the real world. Both the real world and the virtual elements are still limited to a screen, be it a computer or a mobile screen. By using a camera, the idea is to "augment" the reality by introducing virtual objects. Only recently, cameras specialized in this field started to become more accessible to consumers and phones started to offer a better tracking system with new possible applications.

Finally, the latest frontier is Mixed Reality which, as the name suggests, aims to mix virtual and augmented reality, bringing the user immersion to a new level.


Figure 1.3: (Left) The Oculus Rift and its Motion Controllers. (Right) The HTC Vive and its Motion Controllers.

Figure 1.4: Microsoft HoloLens headset.

By using a VR headset and a high-resolution 3D camera, the user is directly transported into a virtual representation of the real world augmented with virtual objects. In recent times, Microsoft has made some big steps in this direction by showing its HoloLens, a headset loaded with a powerful 3D camera which constantly scans the real world. The camera creates an internal 3D representation of the world, used to provide an extremely precise tracking system to position and visualize 3D elements in it. As the user's head moves, the headset moves and rotates the objects accordingly, effectively giving the user the illusion of the presence of virtual objects in the real world. While the whole process is hidden from the final user, it results in a precise, high quality blending of Virtual and Augmented Reality.

The virtual world into which the user is transported can be either a virtual representation of the real world or even a completely new world, realized in such a way as to convince the user of being part of it.


Figure 1.5: Different kind of realities.

As of today, there are not yet devices able to make use of all five human senses. In fact, virtual environments such as videogames or virtual training systems can usually appeal only to human vision and hearing, reducing the user's immersion in the virtual world. For this specific reason, the focus is usually on the vision and hearing senses, trying to make the virtual world as convincing as possible, and letting the user navigate and freely explore it. With specific devices, called haptic interfaces, it is possible to receive tactile feedback in response to actions performed in the virtual world. The PERCRO laboratory of the Scuola Superiore Sant'Anna of Pisa (where this thesis has been developed) works on the development of innovative simulation systems, with the help of haptic interfaces and VR systems.

1.4 Applications of Virtual Reality

Even if the most widely adopted applications of Virtual Reality are in the gaming and entertainment fields, Virtual Reality is not an end in itself, and the literature presents many other kinds of possible applications, some of which are more challenging or unusual than others.

1.4.1 Military

Virtual Reality is adopted by the military [41] for training purposes. This is particularly useful for training soldiers by simulating hazardous situations or other dangerous settings where they have to learn how to react in an appropriate manner. The uses of VR in this field include flight or battlefield simulation, medic training [48] and virtual boot camp. It has proven to be safer and less costly than traditional training methods.



Figure 1.6: VR for parachute simulation.

1.4.2 Healthcare

Healthcare is one of the biggest adopters of virtual reality, which encompasses surgery simulation [37], phobia treatment [58] and skills training [30]. A popular use of this technology is in robotic surgery [44], where surgery is performed by means of a robotic device controlled by a human surgeon, which reduces time and risk of complications. Virtual reality has also been used in the field of remote telesurgery [49], where the operation is performed by a surgeon at a location separate from the patient. Since the surgeon needs to be able to gauge the amount of pressure to use when performing a delicate procedure, one of the key features of such systems is force feedback [29].

Another technology widely used in healthcare is Augmented Reality, which makes it possible to project computer-generated images onto the part of the body to be treated or to combine them with scanned real-time images [60].


Figure 1.7: VR for telesurgery.

1.4.3 Virtual Heritage

A new trend in Virtual Reality is certainly the use of such technology in the field of cultural heritage. Virtual Heritage [21] is one of the computer-based interactive technologies of virtual reality: it creates visual representations of monuments, artefacts, buildings and cultures in order to deliver them openly to global audiences [45].

Figure 1.8: VR for cultural heritage.

The aim of virtual heritage [42] is to restore ancient cultures as a virtual environment in which the user can immerse himself, in such a way that he can learn about the culture by interacting with the environment.

Currently, virtual heritage has become increasingly important in the preservation, protection, and collection of cultural and natural history [22]. The historical resources of many countries are being lost and destroyed [35]. With the establishment of new technologies, Virtual Reality can be used as a solution to problematic issues concerning cultural heritage assets.


Chapter 2

Background And State of the Art

This chapter serves as an introduction to all the preliminary concepts needed to fully understand the whole system. It starts by giving an overview of the frameworks used to realize VR applications, then gives particular attention to the engines that are, up to now, the most used systems for realizing the vast majority of big VR applications, i.e. Unity and Unreal.

Furthermore, the problems that have been encountered during the development of the thesis are introduced, together with the way they have been studied to gain the maximum advantage from the already existing solutions. Here a little consideration is needed: those solutions are too often wonderful and effective in theory but poor when implemented to be used on real problems. This is mainly due to the fact that most papers lack implementation details and too often remain at a high-level description. Furthermore, it is very difficult to find concrete and simple examples of these solutions, and trying to build them from scratch always ends in a big mess that wastes a lot of time.

Another thing to take into consideration is the fact that during the development of these plugins not a single but multiple hard problems have been faced. Spending too much time on just one of them would have resulted in a multi-year project, which was not exactly what we wanted. Unfortunately, this has led to adopting simpler solutions for problems that definitely need more attention, but this will surely be a matter for future developments. It is certainly pretty hard to realize World of Warcraft if you have never even realized Pong.


2.1 Virtual Reality Frameworks

Commercial virtual reality authoring systems allow users to build custom VR applications; some examples are WorldViz [20], EON Studio [1], COVISE [46] or AVANGO [56]. However, these systems are not open source and only allow the programmer to implement specific applications, which often do not offer advanced support for other technologies or case scenarios. On the other hand, academic VR frameworks are designed to be rather limited in scope: they mostly abstract from PC clusters to drive a multi-screen display device and support a variety of tracking systems, such as Mechdyne's CAVElib [4] and Carolina Cruz-Neira's VR Juggler [25]. Nevertheless, they leave higher-level tasks to the programmer, such as navigation or menu widgets. Some existing open source VR systems are just as extensible as CalVR and can be used for many tasks interchangeably with it; examples are Oliver Kreylos' VRUI [39], Bill Sherman's FreeVR [51] or Fraunhofer IAO's Lightning [26]. CalVR integrates OpenSceneGraph [9] (OSG) for its graphical output. OpenSceneGraph was modeled after SGI Performer [47] and is syntactically very similar, but it has evolved into its own programming library over the years. However, OpenSceneGraph lacks cluster-ability, tracking support, a way to describe multi-display layouts, menu widgets and an interaction system. Thus, OpenSceneGraph alone cannot drive VR systems.

In the last few years, many companies have developed their own VR framework. Among all, one of the most common SDKs used in game engines is OpenVR [57], along with its software support, SteamVR [13].

The following sections give an overview of the previously listed frameworks and their peculiarities.

2.1.1 Academic VR Frameworks

CAVELib [4] provides the cornerstone for creating robust, interactive, three-dimensional environments. CAVELib simplifies programming and it is considered the most widely used Application Programmer Interface (API) for developing immersive displays. Developers focus on the application while CAVELib handles all the CAVE software details, such as the operating and display systems and other programming components, keeping the system platform independent.

VR Juggler [25] is a cross-platform virtual reality application development framework maintained by Iowa State University. Their motto "Code once, run everywhere" sums up their goal of simplifying common tasks in VR applications.

The Vrui VR toolkit [39] aims to support fully scalable and portable applications that run on a range of VR environments, starting from a laptop with a touchpad, over desktop environments with special input devices such as space balls, to full-blown immersive VR environments ranging from a single-screen workbench to a multi-screen tiled display wall or CAVE.

FreeVR [51] is an open-source virtual reality interface/integration library. It has been designed to work with a wide variety of input and output hardware, with many device interfaces already implemented. One of the design goals was for FreeVR applications to be easily run in existing virtual reality facilities, as well as newly established VR systems. The other major design goal is to make it easier for VR applications to be shared among active VR research sites using different hardware from each other.

OpenSceneGraph[9] is an open source high performance 3D graphics toolkit. It is used by application developers in fields of visual simulation, games, virtual reality, scientific visu-alization and modelling. It is written entirely in Standard C++ and OpenGL and it runs on all Windows platforms, OSX, GNU/Linux, IRIX, Solaris, HP-Ux, AIX and FreeBSD operating systems. The OpenSceneGraph is established as the world leading scene graph technology, used widely in the vis-sim, space, scientific, oil-gas, games and virtual reality industries.

SGI Performer is a powerful and comprehensive programming interface for developers creating real-time visual simulation and other professional performance-oriented 3D graphics applications.

CalVR combines features from multiple existing VR frameworks into an open-source system. It is a virtual reality middleware system which was developed from the ground up. It implements the core functionality of commonly used existing virtual reality middleware, such as CAVElib, VRUI, FreeVR, VR Juggler or COVISE, and adds to those the support for several non-standard VR system configurations, multiple users and input devices, sound effects, and high-level programming interfaces for interactive applications. CalVR consists of an object-oriented class hierarchy which is written in C++.


2.1.2 Commercial VR Frameworks

WorldViz [20] is a virtual reality software company that provides 3D interactive and immersive visualization and simulation solutions for universities, government institutions, and commercial organizations. WorldViz offers a full range of products and support, including enterprise-grade software, complete VR systems, custom solution design and application development. Vizard is one of the products offered by WorldViz: a virtual reality software toolkit for building, rendering, and deploying 3D visualization and simulation applications. It natively supports input and output devices including head-mounted displays, CAVEs, Powerwalls, 3D TVs, motion capture systems, haptic technologies and gamepads. Vizard uses Python for scripting and OpenSceneGraph for rendering.

EON Reality [1] provides Virtual and Augmented Reality based knowledge transfer for industry and education. EON offers a wide range of true cross-platform solutions enabling Augmented Reality, Virtual Reality, and interactive 3D applications to seamlessly work with over 30 platforms. One of its goals is to bring VR and AR into everyday life, from mobile to commercial uses, pushing the research and development on holographic solutions and immersive systems.

COVISE [46] stands for COllaborative VIsualization and Simulation Environment. It is an extendable distributed software environment to integrate simulations, post-processing and visualization functionalities in a seamless manner. It was designed for collaborative working, allowing engineers and scientists to work together while spread across a network infrastructure. In COVISE an application is divided into several processing steps, which are represented by COVISE modules. Each module is implemented as a separate process and it can be arbitrarily spread across different heterogeneous machine platforms. Major emphasis was put on the usage of high performance infrastructures such as parallel and vector computers and fast networks.

Avocado [56] is an object-oriented framework for the development of distributed and interactive VE applications. Data distribution is achieved by transparent replication of a shared scene graph among the participating processes of a distributed application. Avocado focuses on high-end, real-time virtual environments like CAVEs [52] and Workbenches; therefore, its development is based on SGI Performer.


OpenVR [57] by Valve Software is an API and runtime system that allows access to VR hardware from multiple vendors without requiring that applications have specific knowledge of the hardware they are targeting. Thanks to the OpenVR API it is possible to create an application that interacts with Virtual Reality displays without relying on a specific hardware vendor's SDK. It can be updated independently of the game to add support for new hardware or software updates. The API is implemented as a set of C++ interface classes full of pure virtual functions.

This allows developers to build a single application and release it on multiple VR platforms without editing basically anything but the user input handling, since abstracting over different kinds of controllers is pretty hard and also of little use.

Many VR applications and engines, such as the Unreal Engine and the Unity Engine, simplify their integration of VR by using SteamVR [13], which offers an interface between the OpenVR library and the engine. SteamVR, as the name suggests, is realized by Valve Software, and this ensures not only a great quality of the product but also stability and continuous releases together with new versions of OpenVR. Thanks to these plugins it is possible to use all the OpenVR functionalities as native engine components, which makes it simpler to integrate VR into one's own application. Furthermore, it offers ready-to-use scripts and objects that can simply be inserted in the application to add the most common VR interactions, like Teleport or a UI pointer. This is indeed the most common choice for a single developer or for a small group of people when developing a VR application, and it was obviously chosen to realize the learning environment on which the tests were conducted.

2.2 Game Engine

A Game Engine is an application that provides a suite of high-level development tools for the realization of a game. It usually offers a GUI in which it is possible to control all the functionalities offered by the engine, together with a set of APIs to allow the programmers to define the behaviour of the objects inside the game. Thanks to the layers of abstraction that they offer, it is possible to stay focused just on the game itself, letting the engine take care of all the necessary lower-level machinery like shaders or the physics engine.

A core aspect of game engines is their extensibility: thanks to it, it is possible to enrich their functionalities more and more through new plugins. Another very appreciated characteristic is the possibility to compile the same game on multiple platforms with minor changes, which remain uncorrelated to the way the game interfaces with the hardware.

As one could imagine, a VR application is basically a first-person game in which the user is the main character, but this time he is really surrounded by the environment and there is no 2D display to filter the point of view. In fact, engines offer a wide range of tools to improve VR applications, from graphics and animation tools to audio management and obviously gameplay and core programming for the basic elements of the application. C++ is widely used among all the engines, and most of them implement one or more of the previously mentioned frameworks to provide a fully immersive VR and AR experience, leaving to the programmer only the task of creating the world and the gameplay.

Now it is clear why Unreal and Unity are extensively used for the creation of VR content, whether games or other applications (also named "serious games"). Before presenting them, a little introduction to the XVR Framework is worthwhile, one of the first engines to support development in VR and optimized for this kind of application.

Figure 2.1: Some of the most used game engines nowadays (https://medium.com/@muhsinking/should-you-make-your-own-game-engine-f6b7e3f4b6f5).


2.2.1 The XVR Framework

XVR [53] is an Integrated Development Environment developed at the Scuola Superiore Sant'Anna and designed to create Virtual Reality applications. Using a modular architecture and a VR-oriented scripting language, namely S3D, XVR content can be embedded in a variety of container applications. XVR supports a wide range of VR devices (such as trackers, 3D mice, motion capture devices, stereo projection systems and HMDs).

Due to its extensive usage in the VRMedia group, a large number of libraries have been developed to support the mentioned devices, along with many more developed at the PERCRO laboratory of the Scuola Superiore Sant'Anna. XVR evolved during the years to include its own engine, and while being a powerful and extensible graphic engine, its development has gradually slowed down.

XVR is actually divided into two main modules:

• The ActiveX Control module, which hosts the very basic components of the technology such as versioning check and plug-in interfaces.

• The XVR Virtual Machine (VM) module, which contains the core of the technology, such as the 3D graphics engine, the multimedia engine and all the software modules managing the built-in XVR features.

The XVR-VM contains a set of bytecode instructions, a set of registers, a stack and an area for storing methods. The XVR Scripting Language (S3D) allows specifying the behaviour of the application, providing the basic language functionalities and the VR-related methods, available as functions or classes. The script is then compiled into a bytecode which is processed and executed by the XVR-VM.

In general, an XVR application can be represented as a main loop which integrates several loops, each one running at its own frequency, such as graphics, physics, networking, tracking, and even haptics, at least for the high-level control loop.


2.2.2 Unreal Engine

Unreal Engine [17] is a graphic engine developed by the Epic Games team and mainly used to develop games and 3D applications. Unreal offers a complete suite of development tools for both professional and amateur developers to deliver high-quality applications across PC, consoles, mobile, VR and AR. The engine has a strong and large C++ framework which can be used to develop any kind of Virtual Environment. It offers a large set of tools such as:

• A user-friendly, customizable and extensible editor with a built-in real-time preview of the application.

• Robust online multiplayer framework.

• Visual effects and particle system.

• Material Editor.

• Animation toolset.

• Full editor in VR.

• Built and thought for VR, AR and XR.

• A large number of free assets.

• Audio engine.

• A Marketplace where assets and plugins can be shared and downloaded.

The engine is completely open source and this is its biggest quality. In fact, thanks to this, it is possible to inspect all its features starting from the lines of code implementing them. This brings two main benefits: one is the possibility to find the exact way the engine does things just by looking inside the tons of lines of code; the other is that every programmer can edit a part of the engine, rebuild the whole source and obtain a completely new feature that exploits the native mechanisms of the engine. The latter is especially useful when Unreal does not provide a specific low-level functionality; in this case it is possible both to edit existing classes and to create a new one to insert directly into the engine. While this allows exploiting all the private or not exposed methods of the already existing classes, a big drawback is that the whole engine must be rebuilt any time a little modification is done, and this holds also when trying to test the newly inserted code. Furthermore, the fact that Unreal uses C++ as development language means that also during the coding of the game it is necessary to rebuild the whole game every time, and while there are some optimizations that make the process faster, it is still a bit slow, especially if compared to the time required by Unity.

Unfortunately, while it is possible to read all the Unreal source code, its documentation is not so well done, and while all the classes are documented, they usually lack important descriptions and examples.

The basic building block of Unreal is a Module. The engine is implemented as a large collection of modules, where each of them adds new functionalities through a public interface. While modules are written in C++ code, they are linked through C# scripts compiled using UnrealBuildTool. Every time a specific module is added, a set of new classes and functions is added to the API of the engine, and it is then possible to use all of them during the development of the game. To add a module it is necessary to edit a C# script; some modules also need further modifications in configuration files to be used correctly.
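As an illustration, a module's build rules are declared to UnrealBuildTool in a C# file of roughly this shape; this is only a minimal sketch, and the module name and dependency list are examples rather than anything taken from a real project:

// MyGameModule.Build.cs: declares the module and what it links against.
using UnrealBuildTool;

public class MyGameModule : ModuleRules
{
    public MyGameModule(ReadOnlyTargetRules Target) : base(Target)
    {
        // Engine modules whose public APIs this module uses.
        PublicDependencyModuleNames.AddRange(new string[]
        {
            "Core", "CoreUObject", "Engine", "InputCore"
        });

        // Dependencies needed only by this module's private code.
        PrivateDependencyModuleNames.AddRange(new string[] { "Slate", "SlateCore" });
    }
}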

Unreal works with a big main loop that calls a series of functions in a precise order on all the objects present in the scene. Below are presented some basic concepts needed to get started with the engine:

• Project: A Project is a unit that holds all the content and code that make up an individual game.

• Objects: The base building blocks in the Unreal Engine are called Objects. Almost everything in the Unreal Engine inherits (or gets some functionality) from an Object. In C++, UObject is the base class of all objects; it implements features such as garbage collection, metadata (UProperty) support for exposing variables to the Unreal Editor, and serialization for loading and saving.

• Classes: A Class defines the behaviors and properties of a particular entity or Object used in the creation of an Unreal Engine game. Classes are hierarchical, as seen in many programming languages, and can be created in C++ code or in Blueprints.

• AActors: An Actor is any object that can be placed into a level. Actors are a generic class that supports 3D transformations such as translation, rotation, and scale. Actors can be created (spawned) and destroyed through gameplay code (C++ or Blueprints). In C++, AActor is the base class of all Actors.

• Components: A Component is a piece of functionality that can be added to an Actor. Components cannot exist by themselves, however they can be added to an Actor, giving it access to the Component functionalities. They may be seen as modules which any entity may implement.

• APawn: A subclass of AActor that serves as an in-game avatar or persona, for example the characters in a game. Pawns can be controlled by a player or by the game's AI, in the form of non-player characters (NPCs).

• ACharacter: ACharacter is a subclass of APawn that is intended to be used as a player character. The ACharacter subclass includes a collision setup, input bindings for bipedal movement, and additional code for movement controlled by the player.

• Level: A Level is a user defined area of gameplay. Levels are created, viewed, and modified mainly by placing, transforming, and editing the properties of the Actors they contain.

• World: A World contains a list of Levels that are loaded. It handles the list of Levels and the spawning/creation of dynamic Actors (created at runtime).

• AGameMode: An AGameMode object is an abstract entity that is responsible for managing all the game aspects and rules. As an abstract entity it has no model nor physics, and it mainly works through events that are very different from the ones called on an AActor. In a multiplayer game it exists only on the server machine, for security reasons.

• AGameInstance: The AGameInstance class contains information shared across all levels within a world, and it is an abstract entity too. It is the first thing created when a game session starts and the last one to be destroyed when the game ends. Conversely to the AGameMode, it is replicated in all the instances of an online session and serves to store information that clients can access without breaking the game.

The Unreal Engine offers a new, different way of defining the game core logic and its entities' behavior. The Blueprints Visual Scripting system is a complete gameplay scripting system based on the concept of using a node-based interface to create gameplay elements from within the Unreal Editor. It may be used to define object-oriented classes or objects in the engine. Objects defined by using the Blueprint scripting language are often referred to as 'Blueprints'. The system itself is designed to be powerful and flexible, and is mainly aimed at designers and artists with close to no C++ programming background (for this reason the Epic team is pushing its development). It especially gives the possibility to define functions and objects which can be directly spawned as nodes in a Blueprint graph.

2.2.3 Unity Engine

The Unity Engine was born from a collaboration between Nicholas Francis, Joachim Ante and David Helgason to realize an engine that even a programmer who could not afford expensive licenses could use (more about the history of the engine can be found at [34]). It is a very easy to use engine, thanks to its very well-made documentation and very powerful GUI that allows the user to do basically everything but coding in a very intuitive manner. Despite the fact that it is written in C++, it uses a scripting system based on C#; while this restricts the control the user has over the engine (and also leaves less room for aggressive optimization), it avoids very annoying crashes due to the worst monster ever created: the segmentation fault. Although it is widely used mainly for the development of mobile games, it supports about 30 different platforms.

In the current version, i.e. Unity 2019, it is entirely based on the Component design pattern, first introduced by the most famous quartet in the computer science world: the Gang of Four [32]. This pattern is one of the most used in game development, since it fits very well a lot of common problems like separation of domains (physics, graphics, AI, etc.) and shared behaviour [43]. So, in Unity everything (or almost everything) is just a component of an object. This means that if someone wants to affect the physics of an object he needs to first retrieve the component responsible for the physics, i.e. the Transform, and the same goes for the component that handles the rendering of an object, i.e. the Renderer.

Unity is strongly event driven too, and it is based on a big cycle that continuously dispatches events to all the scripts currently in the scene. The times at which those events are called obviously change based on their type: while some have a fixed timing, e.g. FixedUpdate(), others strictly depend on how fast the hardware is, e.g. Update().

To intercept these events Unity offers a series of C# interfaces and base classes whose methods are called when the associated event happens. The easiest way to handle a desired event is to create a class from the related interface or base class and to implement the method that deals with that specific event. It is worth noting that, unlike Unreal, there is really no need to call the overridden superclass method to make things work.
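As a minimal illustration (the class name and log messages are made up for this sketch, not taken from the thesis code), a MonoBehaviour can both receive the engine lifecycle callbacks mentioned above and implement one of the event interfaces:

using UnityEngine;
using UnityEngine.EventSystems;

// Minimal sketch: lifecycle callbacks plus an event interface.
public class LifecycleExample : MonoBehaviour, IPointerClickHandler
{
    void Start()
    {
        // Called once, before the first frame in which this script is active.
        Debug.Log("Started");
    }

    void Update()
    {
        // Called every rendered frame; frequency depends on the hardware.
    }

    void FixedUpdate()
    {
        // Called at a fixed timestep, independently of the frame rate.
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        // Invoked by the event system when the object is clicked.
        Debug.Log("Clicked");
    }
}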

To have a clear overview of the engine it is necessary to introduce a few basic concepts (a short code sketch follows the list):

• GameObject: In Unity the very base class is GameObject, which is the superclass of all the objects present in the scene. Unlike Unreal, there is no need to create a new class to spawn a new type of object: in fact, the way the behaviour of an object is defined is not within its class type, but inside its components (also in Unreal there is the concept of components, but while the theory behind them is always the same, in ...).

• MonoBehaviour: Components can be defined by generating a new C# script in which a new class is defined as a subclass of MonoBehaviour. This exposes all the basic methods through which it is possible to interact with the engine, like Start() and Update(). It is the core of the whole engine, since about 95% of the code that builds a Unity application lives inside its subclasses.

• ScriptableObject: There are two common reasons why a script does not inherit from MonoBehaviour: its job is to deal with something less related to the game (like the file system or storage), or it does not define a behaviour but is responsible for storing some kind of data. The latter case is exactly what ScriptableObject was built for. In fact, it allows creating new objects without being attached to any GameObject. Conceptually it is like Unreal's AGameModeBase, but its usage is completely different. For example, one of its common uses is to make objects inside the scene data-driven, avoiding filling the script with tons of public attributes.

• Prefab: Up to this point Unity seems to offer no way to reuse the same GameObject with many behaviours attached to it. This would imply that every time a new entity is spawned, all the wanted scripts need to be attached one at a time. Obviously there is a simpler solution: Unity offers the possibility to create a Prefab, which works exactly the same way classes and objects work in programming. Once a scene object becomes a Prefab, it is then possible to spawn multiple instances of it by just instantiating it. This allows the user to define a prototype of a given object and then create as many copies of it as he wants.

• Transform: All objects, even those used just as parents to group some others, have at least one component that is strictly necessary for the object to exist: the Transform. It is the component that manages the three basic characteristics of everything inside a scene, i.e. its position, its rotation and its scaling. Every Transform directly refers to a parent Transform, which allows the position, the rotation and the scaling to be defined in a hierarchical way. For example, most of the time the localPosition attribute is used instead of position, since its (0, 0, 0) point is not at the world origin but at the position of the parent Transform.
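The following sketch ties these concepts together; it is only an illustration, and the class names, the prefab field and the settings asset are hypothetical, not part of the actual project:

using UnityEngine;

public class SpawnerExample : MonoBehaviour
{
    public GameObject enemyPrefab;   // a Prefab acting as a reusable prototype
    public GameSettings settings;    // a ScriptableObject asset holding data

    void Start()
    {
        // Components are retrieved, not inherited: move this object via its Transform.
        Transform t = GetComponent<Transform>();
        t.localPosition = new Vector3(0f, 1f, 0f);   // relative to the parent Transform

        // Instantiate spawns a new copy of the prefab, like creating an object from a class.
        for (int i = 0; i < settings.enemyCount; i++)
            Instantiate(enemyPrefab, t.position + i * Vector3.right, Quaternion.identity);
    }
}

// A data-only asset: instances can be created from the editor's Create menu.
[CreateAssetMenu(menuName = "Examples/GameSettings")]
public class GameSettings : ScriptableObject
{
    public int enemyCount = 3;
}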

Those five concepts are enough to get started with the Unity Engine, and while they certainly do not cover all the basics, they give enough knowledge to understand what is written in chapters 3 and 3.2. Perhaps the only missing thing, though it is very intuitive, is how plugins work in Unity, which is slightly easier than in Unreal.

Here a little clarification is needed: in Unity terminology what we intend to build is a package and not a plugin. In fact, the latter is a library of native code that adds new low-level functionalities to the engine itself and that can be included as a shared library; once added, it is usable in all applications. Furthermore, such plugins can be added only in the non-free versions of the engine. A package, instead, is a folder that contains new scripts and resources that can be used within the application in which they have been included. This means that nothing can be added except things the engine already allows to build (which are still enough for the vast majority of cases).

Anyway, to avoid further confusion and not to make things too specific, the word plugin will be used to mean both Unreal plugins and Unity packages.

2.3 The Available VR Editors

Reinventing the wheel, or better, the anti-pattern "Not Invented Here" (NIH), was the first issue to resolve. Even with a ready-made platform the project build encountered several complications; without it, even the most basic operations (like displaying the scene on the headset) would have been a great challenge, with the risk of making some features unaffordable. What follows is a list of downloadable implementations of editors for virtual environments; for each one the functionalities, pros and cons are described. From this analysis arise the motivations behind the choice of adopting the Unity extension for its editor, also drawing on section 2.2.3; more importantly, the list gives an overview of the facilities currently expected of an editor working in VR. In fact, looking only at editors for game engines is too narrow: there are many different solutions for other kinds of tasks, like sculpting, drawing and mesh editing.

Tilt Brush [14] is a room-scale 3D-painting virtual-reality application available from Google, originally developed by Skillman & Hackett. With virtual reality devices like the HTC Vive or Oculus, artists can immerse themselves in a virtual 3D canvas to experiment with something that is difficult to try in reality, requiring really expensive and cumbersome systems. Tilt Brush differs from other similar applications in its complete set of palettes and brushes, but what makes it really important is the possibility to use not just plain colors but shaders, special light effects and particles to realize one's own painted world. Tilt Brush has been a case study for the present project because of its gaze-based interface, a user interface that follows the eye movement [36], which earned it a Proto Award for Best GUI [40].

Oculus Medium [8] is an immersive VR experience, produced by Oculus VR, that lets the user sculpt, model, paint and create tangible objects in a VR environment. Differently from Tilt Brush, Medium is focused on creating 3D objects, not entire scenes, and it provides all the expected tools to shape one's own model. All the operations like swirl, flatten, cut and inflate are available, and they can be performed through a very clever tool that allows really precise refinement. Medium proposes a simple 2D contextual menu to set the current tool options by selecting them with a ray starting from the sculpting tool. The two most interesting facilities found in this product are the capacity to set the scene origin position, very useful also when scene editing is involved, and the in-scene interactive guideline system that supports the designer in accomplishing actions with more precision.

MARUI [3] is a plug-in for Autodesk Maya, and for Blender as well, that allows doing modeling and animation work in VR with an Oculus Rift, HTC Vive, Windows MR, or Varjo VR-1 headset. MARUI is a complete tool suite providing a lot of really interesting features that are relevant in every field concerning virtual reality. There are lots of interesting utilities, but those that capture the attention the most are certainly: the ability to animate meshes, the creation and movement of cameras independently from the user, and the collaborative review (two people can share the same scene, one in VR and one through the desktop workstation). The just mentioned features are those that the development process aimed to achieve and that the future improvements could try to obtain.

Unreal Engine VR Mode [18] enables designing and building worlds in a virtual reality environment using the full capabilities of the Unreal Editor toolset, combined with interaction models designed specifically for virtual reality world building. This is the first example of a game engine providing a VR version of its editor. In summary, the capabilities offered to the user cover three aspects: the set of interaction abilities with the scene and its elements (translating, rotating, scaling); the controls for navigating the scene, selecting and manipulating the actors (remember AActor), and working with the editor panels and windows; the user interfaces for VR, both with a classical window-style menu and with gizmos in the scene. There is also a mode for mesh editing, but it is rather negligible compared to ad-hoc tools. The editor gives the developer basic functions like placing an AActor in the environment or moving it around, but it completely lacks a way to attach behaviour to objects, a really necessary feature for a software designed to make interactive environments. On the user interface side, the editor mixes traditional user interfaces for the most complex actions with a very elegant solution (like the radial menu) for the most frequently requested ones.

Unity Editor XR [15] is the Unity counterpart of the VR editor shown for Unreal. It was presented at Unite 2016 by Timoni West, director of XR at Unity Labs. The tool has roughly the same functionalities as the Unreal Engine VR Mode (the ability to perform transform operations is there) and it transposes the base editor functionalities (inspector, console, scene hierarchy and project assets folder) into VR. It introduces several utilities for scene management (MiniWorld, scale factor), locomotion, object gizmos in the scene and really useful options regarding object positioning (it is possible to select whether an object must avoid intersecting other objects, to set the pivot at the feet of the object instead of its center, and so forth). Like Unreal, there is no way here either to add dynamic behaviour to a GameObject, making the system really limited. Up to this point there would be no advantage in preferring the VR editor for Unity to that of Unreal; however, Unity Labs strove to provide open API documentation in order to allow future development around the product. For this reason, and for the simplicity of programming in Unity, Editor XR was chosen as the base on which to start the project.

All the other editors seen in the current section were taken into consideration for their features, as in the MARUI case, but unfortunately only the Unity editor allows real development on top of itself, making it essentially the only possible choice.


Chapter 3

Architecture and Implementation

3.1 Software Design Overview

The project presented in this document has been designed to follow the structure of the Unity Editor XR framework, on which the project is based. The framework provides a wide set of built-in facilities and it is designed to be easily extendable, without requiring any change to the existing code. The project actually shares the framework's internal mechanisms in order to use both the ready-made internal objects of the framework and, as will be explained later, the add-ons. The framework peculiarities will be explained in the first part of the current section (how they are exploited and the connection points between them and the plugin developed for the project), while the second part will be dedicated to the plugin integration and the discussion of the newly developed features.

In summary, the overall architecture follows fig. 3.1, in which each layer is built upon the one below. Even if the architecture suggests communication only between neighbouring layers, the project implements a very practical dependency injection and messaging system (which will be explained in detail in the Input Management (3.2.4) and Navigator (3.1.5) sections respectively).


Figure 3.1: Project Layered Architecture.

3.1.1 Modules and Interface Binder

The Module layer is composed of a group of objects, named modules, which provide a common set of resources and services required by the project's components. Starting from this set of modules, the Module layer provides a mechanism to bind any object requiring a particular resource to the specific module able to satisfy it. Each module should cover a really short range of the program topics, in order to encourage the proliferation of very tiny modules specialized in few subjects. In this way the code results clearer and it is always known which module is concerned with a precise topic. From here an inconvenience arises: it is not feasible to give an object the chance to declare directly which set of modules it needs to satisfy its dependencies. The Interface Binder is the adopted solution: it offers a uniform access interface to the set of heterogeneous modules while hiding from the objects what they are concretely bound to. In practice, it works as a registry: each loaded module subscribes itself to the binder during the program's startup phase, so that the modules' resources become available to the external components. Then, when an object requires the satisfaction of its own dependencies, the Interface Binder passes it through each module, which scans the object to detect and inject the dependencies it handles.
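A minimal sketch of this registration and injection flow is shown below; the interface and class names are hypothetical and are not taken from the Unity Editor XR code base:

using System.Collections.Generic;

// Modules register themselves and later scan target objects to inject
// the dependencies they know how to satisfy.
public interface IModule
{
    void Inject(object target);   // give the target the resources this module handles
}

public interface IUsesInput
{
    System.Action<float> OnTriggerValue { get; set; }
}

public class InputModule : IModule
{
    public void Inject(object target)
    {
        // Only objects declaring the matching interface receive this dependency.
        if (target is IUsesInput user)
            user.OnTriggerValue = value => { /* forward raw device input */ };
    }
}

public static class InterfaceBinder
{
    static readonly List<IModule> modules = new List<IModule>();

    // Called by each module during the startup phase.
    public static void Register(IModule module) => modules.Add(module);

    // Called for every object whose dependencies must be resolved.
    public static void Bind(object target)
    {
        foreach (var module in modules)
            module.Inject(target);
    }
}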

Here the word dependency denotes not just collections of data but also functions and events. In this way, dynamic system updates can be propagated without using any


Figure 3.2: Object dependencies resolution phases

references to concrete implementations, either towards the target objects or towards the modules. Compared with the standard layered architecture, the dependency injection system works cross-layer, making the modules' services available at any level of the project.
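For instance, a module may expose its updates as a callback interface, so that a consumer merely implements the interface and is notified on every change (again, the names ISelectionChanged and SelectionModule are illustrative assumptions):

using System.Collections.Generic;
using UnityEngine;

public interface ISelectionChanged
{
    void OnSelectionChanged(GameObject selected);
}

public class SelectionModule : IModule   // IModule as in the previous sketch
{
    readonly List<ISelectionChanged> m_Listeners = new List<ISelectionChanged>();

    public void ConnectDependencies(object target)
    {
        if (target is ISelectionChanged listener)
            m_Listeners.Add(listener);
    }

    // Invoked by the module itself whenever the selection changes; every
    // registered consumer is updated without knowing the module's concrete type.
    public void SetSelection(GameObject selected)
    {
        foreach (var listener in m_Listeners)
            listener.OnSelectionChanged(selected);
    }
}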

As a further specification, the Module layer acts as the system backbone and is not intended to define precise user interactions or actions on the final product. Each module defines the basic features that the system provides (e.g. device input tracking, object initialization, physics interaction), which the upper levels exploit to manipulate the virtual environment (see Tools (3.1.2)).

Implementation (3.2) covers some of the basic system functionalities, both those belonging to the Unity Editor XR framework and those created to build the plugin, with a detailed explanation of the dependency injection system and of how external objects access it.

3.1.2

Tools

As explained in Input Management (3.2.4), a module supplies common services and resources but is not supposed to interact directly with the environment in any manner. Modules designed for specific edge cases would lead to an overburdened organization, without any recognisable benefit, since the number of entities involved is quite limited.


A tool, by contrast, consists of a core logic, which defines and applies the tool's functionalities, and a graphical user interface spawned in the virtual environment.

The tool, in fact, is meant to interact with the environment and does not provide facilities for objects external to the tool itself. It works directly with the human agent, providing a graphical interface to invoke its functions. Another principal difference with respect to a module is the tool's lifetime: a module is loaded during the Unity Editor XR start-up and unloaded when the editor is closed, whereas a tool behaves like an ordinary program, i.e. it is instantiated when the user asks to use it and closed through the corresponding action on the UI.

Since each created tool corresponds to a new plugin feature with its own unique implementation, the tools are examined one by one in Implementation (3.2); the remainder of this section discusses the general design behind them.

As fig. 3.3 shows, the idea behind the development of a tool is to follow the well-known Model-View-Controller (MVC) design pattern, where the tool plays the role of the Controller. This means that the tool has to manage the user interaction events attached to the Menu (the View) and to run the related internal logic. The target object (the Model) is not to be understood as a single well-defined object; rather, it is the set of information relevant to a tool, which can be retrieved through the dependency injection system. This is because the Unity Editor XR framework wants to remain very flexible about the application of MVC, which sounds more like a good practice to follow than a straightforward implementation of the pattern. It might even be possible to merge the tool with the menu, since any facility available to one is available to the other as well (through the DI system), and all three components use the same technology (Unity uses C# for the UI too).
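A minimal sketch of this split could look as follows, reusing the ISelectionChanged interface from the previous example and purely illustrative names for the menu and the tool:

using System;
using UnityEngine;

public interface IScaleMenu                  // the View contract
{
    event Action<float> scaleRequested;      // raised when the user moves a slider
}

public class ScaleTool : ISelectionChanged   // the Controller
{
    GameObject m_Target;                     // the Model: the data relevant to the tool

    public ScaleTool(IScaleMenu menu)
    {
        menu.scaleRequested += OnScaleRequested;
    }

    // Called by the selection module through the dependency injection system.
    public void OnSelectionChanged(GameObject selected) { m_Target = selected; }

    void OnScaleRequested(float factor)
    {
        if (m_Target != null)
            m_Target.transform.localScale *= factor;
    }
}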

3.1.3

Menus

The menu provides the View (following the architecture of fig. 3.3) for the tool. In practice it translates the possible UI actions (e.g. pressing buttons, moving sliders, activating toggles, etc.) into more semantic events that are then bound to the tool's actions.
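As a sketch, a menu could be a MonoBehaviour that subscribes to the raw Unity UI callbacks and re-exposes them as the semantic events consumed by the tool (ScaleMenu and its serialized slider are assumptions of this example):

using System;
using UnityEngine;
using UnityEngine.UI;

public class ScaleMenu : MonoBehaviour, IScaleMenu
{
    [SerializeField] Slider m_ScaleSlider;   // wired up in the menu prefab

    public event Action<float> scaleRequested;

    void Awake()
    {
        // Raw UI callback translated into a semantic event bound to the tool's action.
        m_ScaleSlider.onValueChanged.AddListener(value => scaleRequested?.Invoke(value));
    }
}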

For the sake of clarity, Unity has its own system integrated inside the editor to create graphical user interfaces. What the Menu does is simply add a minimal logic behind the user


Figure 3.3: The MVC (Model-View-Controller) pattern representation applied to the tools' design

interaction with the interface, linking the human agent with the tool. No other constraints or specifications are imposed on the menus, for example on how to build the interfaces, how they should be presented to the user or how to navigate them with the available devices. This minimalism is a weakness of the framework, since it gives no guidance to an inexperienced developer. During the project development, a set of ready-to-use widgets was therefore created to simplify the realisation of user interfaces.

The menus conclude the overview of the Unity Editor XR architecture. As stated in the introduction, the same design has been adopted in the project's plugin, so that the new features and fixes can be brought back into the framework as native components with no further effort.

The second part of the section discusses the extensions made to the framework. In particular, it explains which functionalities are missing in the native implementation and the improvements made to the building of tool interfaces.

3.1.4

Applications

The first new concept introduced by the framework extension is the application. Although the Application layer (see fig. 3.1) is built upon the Navigator layer, the analysis of the new concepts starts from the applications, because they fundamentally replace the menus seen in the previous section and the Navigator is strictly related to them.

The applications have been introduced to fill the design gap from which the menus suffer. In Menus (3.1.3) there was a brief note regarding the menus' inability to define


Figure 3.4: Application hierarchy representation with message sending. The filled arrows, starting from the Root, show the downward selection of the widgets inside the hierarchy, while the dotted arrow shows the upward selection from a child widget to a parent one.

a correct way to implement the menus themselves. In this section those weaknesses are treated in more detail, explaining how the applications remedy them while achieving the same goal as the menus.

The solution adopted is the simplest one: the applications act exactly like the menus, but within a fully controlled context.

In practice, from the Unity Editor XR framework's point of view an application is totally indistinguishable from a menu, sharing the same View role in relation to the tool (fig. 3.3). It still translates the user's interaction with the GUI into more semantic events, which it exposes to the tool in order to run the functionalities required by the user. The application also handles all the changes to the interface, coming both directly from the user and from the tool following an update of the model.

What makes the applications different from the menus is the cooperation with the above-mentioned context, which is provided by the underlying Navigator layer. Through it an application gains a set of useful capabilities that allow a much more comfortable and effective development of all the interface aspects.


The first aspect, and in some sense the most important one, is navigation over the user interface through the controller devices. The relationship between application navigation and the Navigator is very strict, but it is covered in the next section, Navigator (3.1.5), where the background will be much clearer. Unity Editor XR provides only the canonical ray casting as interaction method, which is practically the transposition of the mouse pointer into a 3D world. The Menu layer therefore did not care about the other possible inputs provided by the devices (such as touchpads and buttons) beyond the ray casting, which moreover Unity handles internally. This led to the lack of a high-level input manager abstracting from the raw input and consequently allowing the View logic, i.e. the menu, to define what happens when an input is performed. There is of course the possibility to define behaviour related to an input, but the values to manage are raw and device dependent (see Implementation (3.2) for an example). As fig. 3.1 shows, beside the Application layer there is the Messaging System. The Messaging System is a Unity built-in technology that simply sends a "message" to a target object, which takes care of handling it and performing some actions depending on the message type and content. Unity uses it in its own UI elements to define the behaviour when external events occur (such as pointer clicks, submissions and so on). The "messages", and in particular the Messaging System, constitute the right level of input abstraction, completely missing in the native Menu layer, through which the applications receive external events and run their internal logic. Thanks to the "messages" the application does not have to worry about managing raw input values; it just has to declare which kinds of "message" it needs to handle. However, in order to make the system work effectively, the applications are required to supply two components: a navigation route and a starting point named root. These two components provide an unambiguous organisation for determining the actual addressee of the "messages" sent through the Messaging System. The idea is to organise the application as a tree where the widgets are the nodes and the navigation links are the edges, doubly linked so that at any moment the tree can be traversed both downward and upward. The root is the item taken as target when the editor spawns the application. It then becomes possible to travel along the application's set of widgets just by sending a message to the currently selected widget saying whether to "go down" or "go up". Notice that the messages are totally unbound from


Figure 3.5: The context in which an application is immersed, allowing it to configure itself through the provided settings.

any concrete context and could be generated by a touch device, a keyboard or software simulating physical devices. Binding the applications to "messages" therefore makes them agnostic with respect to the execution platform: application development can be carried out regardless of which device the agent will eventually choose and, as an even better result, the developer can design and build an application without installing the full Unity Editor XR framework, since the Messaging System is a basic Unity service, completely untied to the VR editor.
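A possible way to realise such messages with Unity's built-in event facilities (UnityEngine.EventSystems) is sketched below; the INavigationHandler interface and the NavigationMove values are assumptions made for illustration, not part of the framework:

using UnityEngine;
using UnityEngine.EventSystems;

public enum NavigationMove { Down, Up }

// Any widget willing to receive navigation "messages" implements this interface.
public interface INavigationHandler : IEventSystemHandler
{
    void OnNavigationMove(NavigationMove move);
}

public static class NavigationMessenger
{
    // The sender only needs the currently selected widget; it does not care
    // whether the move was produced by a touchpad, a keyboard or simulated input.
    public static void Send(GameObject selectedWidget, NavigationMove move)
    {
        ExecuteEvents.Execute<INavigationHandler>(
            selectedWidget,
            null,
            (handler, eventData) => handler.OnNavigationMove(move));
    }
}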

The second aspect aims to enhance the configuration possibilities for application customisation. The goal is to give the developer simple ways to define aspects such as the application's window size, animations, colour theme and other settings. For this reason the Navigator layer provides the applications with a context where their settings can be read and, more importantly, can be set by the applications themselves when some of them need particular tweaks.
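A context of this kind can be imagined as a small settings object handed to the application before its interface is spawned (all names here are hypothetical):

using UnityEngine;

public class ApplicationContext
{
    public Vector2 WindowSize { get; set; } = new Vector2(0.4f, 0.3f);  // metres in the scene
    public Color ThemeColor { get; set; } = Color.white;
    public bool EnableAnimations { get; set; } = true;
}

public interface IApplication
{
    // Called by the Navigator before spawning the interface, so the
    // application can read the defaults and tweak the ones it needs.
    void Configure(ApplicationContext context);
}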

The infrastructure built around the applications makes them a more valid solution than the basic menus, which do not really help the developer. The evolution brought by the new system makes building new tools easier and more satisfying than the classic way, where every aspect had to be taken care of by the developer, since the menus are too primitive and poor in functionality.


3.1.5

Navigator

The Navigator layer is the last one composing the plugin's architecture (fig. 3.1). As explained in Applications (3.1.4), the Navigator is the system that provides the applications with all the infrastructure they need to accomplish their purposes.

The Navigator works like a window manager for desktop applications, controlling the placement and appearance of windows within a windowing system in a graphical user interface. In this case the windows are the applications' interfaces, drawn by the Unity engine.

More in detail, the Navigator builds the already mentioned context, defining the settings that the system applies to properly configure the application's graphical interface before spawning it in the virtual environment. The Navigator, however, is not only concerned with the life cycle of a single application: it has to orchestrate the whole application pool, which means managing the inter-relations between applications, such as the transition between two applications, resetting the context state to the initial one, possible shared caching systems and so forth. Moreover, having a programmable wrapper around the applications makes it possible to insert extra overlays on the GUI to display additional information or other elements; in the current project, for example, the overlay space is exploited to show tooltips, such as messages or input fields, and menu sections common to all the applications, like returning to the start menu or exiting the plugin.

Widely discussed in Applications (3.1.4) are the input handling and the navigation system. To avoid platform-dependent configuration the Messaging System was introduced, and the discussion of how the "messages" are sent was left to this section.

Once the application is spawned into the environment, the Navigator obtains the root and the navigation route components, stores them and starts by selecting the widget referenced by the root. Then, whenever a movement "message" occurs, it is enough to look at the currently selected widget and, following the navigation route, determine which one comes next in order to select it. The procedure is quite simple and also works with unordered selection, as happens with ray casting selection. Fig. 3.6 shows the operations performed by the Navigator to manage the raw input coming from the devices: by applying some translation logic it drives the Messaging System to create and send the "messages", which are finally delivered to the target application.
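A simplified sketch of this selection logic, with hypothetical names and reusing the NavigationMessenger idea from the previous sketch, could be:

using UnityEngine;
using UnityEngine.EventSystems;

// One node of the application's navigation route (illustrative structure).
public class NavigationNode
{
    public GameObject widget;
    public NavigationNode parent;        // followed when "going up"
    public NavigationNode[] children;    // followed when "going down"
}

public class Navigator
{
    NavigationNode m_Selected;

    // Root and navigation route are stored when the application is spawned.
    public void OnApplicationSpawned(NavigationNode root) { Select(root); }

    // Raw device input has already been translated into a NavigationMove message.
    public void OnMoveMessage(NavigationMove move)
    {
        var next = move == NavigationMove.Up
            ? m_Selected.parent
            : (m_Selected.children != null && m_Selected.children.Length > 0
                ? m_Selected.children[0]
                : null);

        if (next != null)
            Select(next);
    }

    void Select(NavigationNode node)
    {
        m_Selected = node;
        // Notify the widget through the Messaging System so it can react,
        // for instance by highlighting itself.
        ExecuteEvents.Execute(node.widget,
                              new BaseEventData(EventSystem.current),
                              ExecuteEvents.selectHandler);
    }
}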
