University Of Pisa

Department of Computer Science

Master’s Degree in Computer Science

Academic Year 2016/2017

Design and development of a fully

immersive training simulator for

complex industrial maintenance

operations

Supervisors: Prof. Franco Tecchia, Prof. Marcello Carrozzino

Candidate: Francesco Desogus

Reviewer


To my family, my loved one and my friends.


Acknowledgments

First, I would like to thank my family for their continuous support, even though I am far away from home. I couldn't have done it without their help and encouragement over all these years, and for this I will be eternally grateful. I would also like to thank Nena, who has always believed in, encouraged and supported me.

I want to express my gratitude to my supervisors, Dr. Franco Tecchia and Dr. Marcello Carrozzino of Scuola Superiore Sant'Anna, for giving me the opportunity to work in the PERCRO lab, for letting me use the newest technologies available for Virtual Reality, and for all the help given to me whenever I was stuck.

I would also like to thank all of my dear friends, both from Pisa and from Cagliari, for all the good times spent together.

Finally, a special thanks to the Internet for being an infinite source of information. I would also like to curse the Internet for being an infinite source of information.


Abstract

This thesis covers the design and development of an immersive Virtual Reality application that aims to provide an easier way to train operators for industrial maintenance operations. Training a specialized workforce requires huge amounts of time, resources and logistic facilities. Virtual Reality introduces significant benefits into the training process by removing the need for cumbersome and expensive physical mockups. At the same time, it allows multiple trainees to work simultaneously, whereas the use of physical machines would require trainees to operate one at a time.

Furthermore, the application allows users to review industrial CAD models at human scale in different scenarios, with realistic rendering and different kinds of lighting.

This document describes how the aforementioned issues were addressed by exploiting HMDs such as the HTC Vive and the Oculus Rift, using the Unreal Engine to develop the Virtual Reality application.


Contents

1 Introduction 10

1.1 Visualization Systems . . . 11

1.1.1 The CAVE system . . . 11

1.1.2 Head Mounted Displays . . . 12

1.2 Applications of Virtual Reality . . . 15

1.2.1 Military . . . 15

1.2.2 Healthcare . . . 16

1.2.3 Virtual Heritage . . . 16

1.3 Objectives Of The Thesis . . . 18

1.4 Thesis Outline . . . 19

2 Background And Related Work 21

2.1 Virtual Reality Frameworks . . . 22

2.1.1 Academic VR Frameworks . . . 22

CAVELib . . . 22

VR Juggler . . . 23

VRUI . . . 23

FreeVR . . . 23

Open Scene Graph . . . 23

SGI Performer . . . 24

CalVR . . . 24

2.1.2 Commercial VR Frameworks . . . 24

WorldViz . . . 24

EON . . . 25

COVISE . . . 25

Avocado . . . 25


OpenVR and SteamVR . . . 26

2.1.3 Commercial SDKs and Game Engines . . . 26

Google Daydream . . . 26

Unity3D Engine and VR . . . 26

Unreal Engine VR . . . 27

2.2 The XVR Framework . . . 27

2.2.1 The S3D scripting language . . . 28

2.3 The Unreal Engine . . . 30

2.3.1 Terminology . . . 30

2.3.2 The Blueprint Visual Scripting language . . . 32

2.3.3 C++ development . . . 34

2.3.4 Unreal Engine open source . . . 35

2.4 VR approaches for Immersive Design . . . 36

2.5 VR for Training . . . 37

3 Overall System Design 39

3.1 The XVR prototype . . . 40

3.1.1 Support for HMDs . . . 40

3.1.2 User movements . . . 40

3.1.3 User interaction: menu . . . 43

3.1.4 User interaction: object manipulation . . . 45

3.2 The Unreal Engine 4: feasibility study . . . 45

3.2.1 The Unreal Engine: an overview . . . 46

Relevant Gameplay Elements . . . 47

The performance profiler . . . 48

3.2.2 Runtime mesh: loading . . . 50


3.2.4 Runtime mesh: performance . . . 55

3.3 The Unreal Engine application . . . 57

3.3.1 Support for the AAM mesh format . . . 57

3.3.2 VR integration . . . 57

3.3.3 CAD object manipulation . . . 58

3.3.4 Menu interaction . . . 60

3.3.5 Assembly panel: the Record tab . . . 63

Overview . . . 64

Recording session . . . 65

Grouping objects . . . 67

Sequences editing . . . 69

3.3.6 Assembly panel: the Playback tab . . . 70

3.3.7 Details panel . . . 72

The Details tab . . . 73

The Materials tab . . . 73

3.3.8 Utility panels and functionalities . . . 75

The Scene Selector panel . . . 75

The Settings panel . . . 76

Undo/Redo . . . 77

4 Project Implementation 79

4.1 C++ and Blueprint integration . . . 79

4.1.1 Unreal Engine reflection system . . . 79

4.1.2 Application workflow . . . 82

4.2 System overview . . . 82

4.2.1 The UserPawn class . . . 83

The CustomMotionController class . . . 83

Interaction with the virtual environment . . . 85


The Outline material . . . 88

4.2.3 The AAMimporter class . . . 92

AAM mesh format . . . 92

Template material . . . 95

AAM parser . . . 96

4.2.4 The menu system . . . 98

UMG and C++ . . . 98

Panels in the virtual environment . . . 99

Movable panels . . . 101

4.2.5 The Assembly Mode . . . 101

The Assembly file format . . . 101

Recording an assembly file . . . 104

Training on an assembly file . . . 104

4.2.6 Utility features . . . 105

The Undo/Redo feature . . . 105

The Scene Selector feature . . . 106

The Animator class . . . 108

4.3 Preliminary user testing . . . 110

4.3.1 Description of the previous pilot study . . . 110

4.3.2 Description of the new pilot study . . . 113

4.3.3 Results . . . 114

5 Conclusions 117

5.1 Achievements . . . 117

5.2 Future improvements . . . 118

Glossary 121

A Appendix 124

List of Code Snippets 135


1. Introduction

Virtual Reality [1] (VR) has seen incredible growth in recent years, bringing a whole new way of interacting with and experiencing 3D worlds, or even the real world itself enhanced with virtual elements in the case of Augmented and Mixed Reality. With the advent of technologies such as the Oculus Rift [2] and the HTC Vive [3], which can bring realistic VR experiences at a reasonable price, and the slightly different approach taken by the Microsoft HoloLens [4] and Google Daydream [5], the interest in VR is constantly increasing.

Virtual Reality is an immersive medium which gives the feeling of being entirely transported into a virtual three-dimensional world, and can provide a far more visceral experience than screen-based media. From the user’s point of view, the main properties of a Virtual Reality experience are presence, immersion and interaction:

• Presence: the mental feeling of being in a virtual space. It is the first level of magic for great VR experiences: the unmistakable feeling that you have been teleported somewhere new. Comfortable, sustained presence requires a combination of the proper VR hardware, the right content and an appropriate system. The feeling of presence is strictly related to user involvement.

• Immersion: the physical feeling of being in a virtual space, at a sensory level, by means of interfaces. It is related to the perception of the virtual world as actually existing. This perception is created by surrounding the user with images, sounds and other stimuli that provide an engrossing total environment.

• Interaction: the user's capacity to modify the environment and to receive feedback on their actions. It is related to the realism of the simulation. Interaction can be direct or mediated: in the first case the user interacts directly with the VE (CAVE-like systems); in the second case the user interacts with the VE by means of an avatar, either in first person (HMD systems) or third person (monitor).


1.1 Visualization Systems

An immersive VR system provides a real-time, viewer-centered, head-tracked perspective with a large angle of view, interactive control, and binocular stereo display. One consideration is the selection of a technological platform to use for the presentation of virtual environments.

This section provides an overview of the two most common visualization systems used in immersive virtual reality applications: CAVE systems and systems based on Head Mounted Displays. As stated in [6], HMDs offer many advantages in terms of cost and portability with respect to CAVE systems. On the other hand, CAVEs provide a broader field of view, higher resolution and a more natural experience for the user.

1.1.1 The CAVE system

A Cave Automatic Virtual Environment (CAVE) is a cubical, room-sized, fully immersive visualization system. The acronym is also a reference to the allegory of the cave in Plato's Republic [7], in which a philosopher contemplates perception, reality and illusion. The walls of the room are made up of rear-projection screens on which high-resolution projectors display images. The user goes inside the system wearing shutter/polarizer glasses to allow for stereoscopic viewing.

CAVE systems provide a complete sense of presence in the virtual environment and the possibility of being used by multiple users at the same time. Apart from the glasses, users inside a CAVE do not need to wear heavy headgear (as in HMD systems).

The SSSA X-CAVE [8] is an immersive visualization system developed by the PERCRO Laboratory of Scuola Superiore Sant'Anna [9]. It is a four-wall room in which each wall is divided into several tiles in order to ensure a good resolution. The front, right and left screens are subdivided into four back-projected tiles each, for a global resolution of about 2500x1500 pixels per screen, whilst the floor is subdivided into six front-projected tiles for a global resolution of about 2500x2250 pixels. The X-CAVE is managed by means of the XVR technology, exploiting its distributed rendering features on a cluster of five workstations. The system is also provided with 7 cameras for optical position and orientation tracking.

Figure 1.1: CAVE systems

1.1.2 Head Mounted Displays

A Head Mounted Display (HMD) is a device worn on the user's head which reflects the way human vision works by exploiting binocular and stereoscopic vision. By using one or two high-resolution screens with a high Dots Per Inch (DPI) count and showing a different point of view per eye, the headset gives the user the illusion of depth. Furthermore, the headset's gyroscope and accelerometer keep track of the head movement, moving and tilting the virtual camera accordingly to achieve a natural, realistic feeling of head movement.

Since the Oculus Rift presentation, many companies have developed their own VR headsets with more or less success, each one proposing innovative solutions to the other big issue for full immersion: interaction. As the user is directly transported into a virtual world, for full immersion they should be able to interact with it as if it were the real world. While this still represents a big challenge in the field, some companies such as HTC and Oculus have proposed effective solutions by tracking the user's movement in a small room or delimited space.

Figure 1.2: Representation of a VR headset functioning

The Motion Controllers substitute for the hands of the user, allowing interaction with the virtual world. Obviously, this still represents a big limitation: there is still no touch feedback, and the movements of the user's hands and fingers remain a challenge to reproduce in the virtual world. Other solutions, such as the ones developed at PERCRO, aim to provide a more realistic feeling of the virtual environment by using haptic devices and body tracking to bring the user's body into the virtual world.

Figure 1.3: (Left) The Oculus Rift and its Motion Controllers. (Right) The HTC Vive and its Motion Controllers

While VR aims to bring the user into a virtual world, Augmented Reality (AR) aims to bring virtual elements directly into the real world. Both the real world and the virtual elements are still limited to a screen, be it a computer or a mobile screen. By using a camera, the idea is to "augment" reality by introducing virtual objects. Only recently have cameras specialized in this field started to become more accessible to consumers, and phones have started to offer better tracking systems with new possible applications.

Figure 1.4: Microsoft HoloLens headset

Finally, the latest frontier is Mixed Reality which, as the name suggests, aims to mix virtual and augmented reality, bringing user immersion to a new level. By using a VR headset and a high-resolution 3D camera, the user is directly transported into a virtual representation of the real world augmented with virtual objects. Recently, Microsoft has made some big steps in this direction with its HoloLens, a headset equipped with a powerful 3D camera which constantly scans the real world. The camera creates an internal 3D representation of the world, used to provide an extremely precise tracking system for positioning and visualizing 3D elements in it. As the user's head moves, the headset moves and rotates the objects accordingly, effectively giving the user the illusion of the presence of virtual objects in the real world. While the whole process is hidden from the final user, it results in a precise, high-quality blending of Virtual and Augmented Reality.


The virtual world into which the user is transported can be either a virtual representation of the real world or a completely new world, realized in a way that convinces the user of being part of it. As of today, there are no devices able to make use of all five human senses. In fact, virtual environments such as videogames or virtual training systems can usually appeal solely to human vision and hearing, reducing the user's immersion in the virtual world. For this specific reason, the focus is usually on the vision and hearing senses, trying to make the virtual world as convincing as possible and letting the user navigate and freely explore it. With specific devices called haptic interfaces, it is possible to receive tactile feedback in response to actions performed in the virtual world. The PERCRO laboratory of the Scuola Superiore Sant'Anna of Pisa (where this thesis was developed) works on the development of innovative simulation systems, with the help of haptic interfaces and VR systems.

1.2 Applications of Virtual Reality

Even if the most widely adopted applications of Virtual Reality are in the gaming and entertainment fields, Virtual Reality is not an end in itself, and the literature presents many other kinds of possible applications, some of which are more challenging or unusual than others.

1.2.1 Military

Virtual Reality is adopted by the military [11] for training purposes. This is particularly useful for training soldiers by simulating hazardous situations or other dangerous settings where they have to learn how to react in an appropriate manner. The uses of VR in this field include flight or battlefield simulation, medic training [12] and virtual boot camps. It has proven to be safer and less costly than traditional training methods.


Figure 1.6: VR for parachute simulation

1.2.2 Healthcare

Healthcare is one of the biggest adopters of virtual reality, encompassing surgery simulation [13], phobia treatment [14] and skills training [15]. A popular use of this technology is in robotic surgery [16], where surgery is performed by means of a robotic device controlled by a human surgeon, which reduces time and the risk of complications. Virtual reality has also been used in the field of remote telesurgery [17], where the operation is performed by a surgeon at a location separate from the patient. Since the surgeon needs to be able to gauge the amount of pressure to use when performing a delicate procedure, one of the key features of such a system is force feedback [18]. Another technology deeply used in healthcare is Augmented Reality, which enables projecting computer-generated images onto the part of the body to be treated, or combining them with scanned real-time images [19].

1.2.3 Virtual Heritage

A new trend in Virtual Reality is certainly the use of this technology in the field of cultural heritage. Virtual Heritage [20] is a computer-based interactive virtual reality technology that creates visual representations of monuments, artefacts, buildings and cultures, in order to deliver them openly to global audiences [21].

The aim of virtual heritage [22] is to restore ancient cultures as a virtual environment in which users can immerse themselves in such a way that they can learn about the culture by interacting with the environment.


Figure 1.7: VR for telesurgery

Figure 1.8: VR for cultural heritage

Currently, virtual heritage has become increasingly important in the preservation, protection, and collection of cultural and natural history [23]. The historical resources of many countries are being lost and destroyed [24]. With the establishment of new technologies, Virtual Reality can be used as a solution for problematic issues concerning cultural heritage assets.


1.3 Objectives Of The Thesis

Global industrial manufacturing capacities constitute a large part of the world's wealth and economy. A key component of any manufacturing business is training: training a specialized workforce, as well as training customers on the produced machinery, requires huge amounts of time, resources and logistic facilities. Training has spillover benefits for the industry (by providing a pool of skilled workers) and for society (improved employment outcomes and flow-on effects such as improved health and lower social welfare costs). Currently, in the field of industrial manufacturing, training is a hugely expensive activity traditionally burdened by a number of issues, such as:

• The cost of realizing and maintaining a training environment, which involves occupying locations with cumbersome machines.

• The cost of using machinery beyond working hours.

• The constant need for a supervisor for trainees.

• Security risks when a trainee uses the equipment.

These considerations have in time led to the suggestion that the use of Virtual Reality could introduce significant benefits, by removing the need for physical mockups in the training process, or at least in some of the procedures. Operators can train on a 3D model of the machine in complete autonomy while staying in a controlled and safe training environment, in which damage to the real machine is reduced or avoided completely. Hence, inexperienced users can take advantage of virtual training before actually facing the real machine. Furthermore, the possibility of combining Computer Aided Design (CAD) with Virtual Reality has become a popular topic in recent years. Using VR along with CAD makes it possible for users to view their designed model in front of them in a realistic 3D environment, in a similar fashion to viewing the end product. This helps during development to check for any faults in the model, to see how it would look at human scale in different scenarios and light settings with highly realistic rendering, and finally to manipulate the different components of the model using Motion Controllers for interaction.

This thesis has the objective of addressing these issues by creating an immersive Virtual Reality application intended to be used by industrial companies that need to train their operators in assembling or maintaining large mechanical machines, but also for reviewing work on industrial CAD models.

The proposed application allows the user to:

• Freely move in the virtual environment and interact with an object representing a CAD model.

• Easily create flexible training sequences that can be used to train operators in assembling the CAD model.

• Review the CAD object in different realistic-looking scenarios and at human scale.

• Manipulate and modify the look of the pieces that compose the CAD object.

The application was developed using the Unreal Engine [25] and was designed to work both on the HTC Vive and the Oculus Rift and their respective Motion Controllers.

1.4 Thesis Outline

This document has been structured in the following chapters:

• Chapter 2 presents the results of the state-of-the-art review: the current solutions in the literature regarding interaction and manipulation in Immersive Virtual Environments are analyzed, and a general overview of 3D interfaces for user interaction is presented.

• Chapter 3 focuses on the system design, presenting a high-level description of the implemented system. It first describes the initial prototype that was developed using the XVR framework and the reasons why the Unreal Engine was used in its place, along with a preliminary study regarding the engine's capabilities. Section 3.3 describes the developed application, its features and the design choices that were made.

• Chapter 4 describes the application at a lower level, in particular the realization of the system, by showing some of its most relevant classes and functions. Section 4.1 describes the design choices used for managing the Unreal Engine project, while section 4.2 describes its key components. Section 4.3 describes a preliminary user study that was conducted.

• Chapter 5 presents the general conclusions of the thesis and some future improvements and ideas.


2. Background And Related Work

This chapter gives an overview and a description of the most relevant features offered by the libraries and software currently available for developing VR applications, with particular attention to those used for this thesis.

First, it describes some of the frameworks used for the development of VR applications, most notably the Unreal Engine.

In section 2.2 the XVR framework is described, giving a general overview of the XVR environment. Section 2.3 presents the Unreal Engine and some of its features, listing the terminology used to refer to specific elements of the engine and finally describing the Blueprint Visual Scripting language.

Section 2.4 outlines the current virtual reality applications used during the design process.

Finally, in section 2.5, an overview of the currently available applications of virtual reality in training is presented.


2.1 Virtual Reality Frameworks

Commercial virtual reality authoring systems allow users to build custom VR applications; some examples are WorldViz [26], EON Studio [27], COVISE [28] and AVANGO [29]. However, these systems are not open source and only allow the programmer to implement specific applications, which often do not offer advanced support for other technologies or case scenarios.

On the other hand, academic VR frameworks are designed to be rather limited in scope: they mostly abstract away the PC clusters that drive multi-screen display devices and support a variety of tracking systems; examples are Mechdyne's CAVELib [30] and Carolina Cruz-Neira's VR Juggler [31]. Nevertheless, they leave higher-level tasks, such as navigation or menu widgets, to the programmer. Some existing open source VR systems are just as extensible as CalVR and can be used for many tasks interchangeably with it; examples are Oliver Kreylos' VRUI [33], Bill Sherman's FreeVR [34] and Fraunhofer IAO's Lightning [35]. CalVR integrates OpenSceneGraph [36] (OSG) for its graphical output. OpenSceneGraph was modeled after SGI Performer [37] and is syntactically very similar, but it has evolved into its own programming library over the years. However, OpenSceneGraph lacks cluster-ability, tracking support, a way to describe multi-display layouts, a menu widget and an interaction system. Thus, OpenSceneGraph cannot drive VR systems on its own.

In recent years, many companies have developed their own VR frameworks. One of the most common SDKs used in game engines is OpenVR [41], along with its software support, SteamVR [39].

The following sections give an overview of the previously listed frameworks and their peculiarities.

2.1.1 Academic VR Frameworks

CAVELib

CAVELib provides the cornerstone for creating robust, interactive, three-dimensional environments. CAVELib simplifies programming and is considered the most widely used Application Programmer Interface (API) for developing immersive displays. Developers focus on the application while CAVELib handles all the CAVE software details, such as the operating and display systems and other programming components, keeping the system platform independent.

VR Juggler

VR Juggler is a cross-platform virtual reality application development framework maintained by Iowa State University. Its motto, "Code once, run everywhere", sums up its goal of simplifying common tasks in VR applications.

VRUI

The Vrui VR toolkit aims to support fully scalable and portable applications that run on a range of VR environments: from a laptop with a touchpad, through desktop environments with special input devices such as space balls, to full-blown immersive VR environments ranging from a single-screen workbench to a multi-screen tiled display wall or CAVE.

FreeVR

FreeVR is an open-source virtual reality interface/integration library. It has been designed to work with a wide variety of input and output hardware, with many device interfaces already implemented. One of the design goals was for FreeVR applications to be easily run in existing virtual reality facilities, as well as newly established VR systems. The other major design goal is to make it easier for VR applications to be shared among active VR research sites using different hardware from each other.

Open Scene Graph

The OpenSceneGraph is an open source, high performance 3D graphics toolkit. It is used by application developers in the fields of visual simulation, games, virtual reality, scientific visualization and modelling. It is written entirely in Standard C++ and OpenGL and runs on all Windows platforms, OSX, GNU/Linux, IRIX, Solaris, HP-UX, AIX and FreeBSD. The OpenSceneGraph is established as the world's leading scene graph technology, used widely in the vis-sim, space, scientific, oil-gas, games and virtual reality industries.

SGI Performer

OpenGL Performer is a powerful and comprehensive programming interface for developers creating real-time visual simulation and other professional, performance-oriented 3D graphics applications.

CalVR

CalVR combines features from multiple existing VR frameworks into an open-source system. It is a new virtual reality middleware system which was developed from the ground up. In addition, CalVR implements the core functionality of commonly used existing virtual reality middleware such as CAVElib, VRUI, FreeVR, VR Juggler and COVISE. On top of those, it supports several non-standard VR system configurations, multiple users and input devices, sound effects, and high-level programming interfaces for interactive applications. CalVR consists of an object-oriented class hierarchy written in C++.

2.1.2 Commercial VR Frameworks

WorldViz

WorldViz is a virtual reality software company that provides 3D interactive and immersive visualization and simulation solutions for universities, government institutions, and commercial organizations. WorldViz offers a full range of products and support, including enterprise grade software, complete VR systems, custom solution design and application development. Vizard is one of the products offered by WorldViz: a virtual reality software toolkit for building, rendering, and deploying 3D visualization and simulation applications. It natively supports input and output devices including head-mounted displays, CAVEs, Powerwalls, 3D TVs, motion capture systems, haptic technologies and gamepads. Vizard uses Python for scripting and OpenSceneGraph for rendering.

EON

EON Reality provides Virtual and Augmented Reality based knowledge transfer for industry and education. EON offers a wide range of true cross-platform solutions enabling Augmented Reality, Virtual Reality, and interactive 3D applications to work seamlessly across over 30 platforms. One of its goals is to bring VR and AR into everyday life, from mobile to commercial uses, pushing research and development on holographic solutions and immersive systems.

COVISE

COVISE stands for COllaborative VIsualization and Simulation Environment. It is an extendable, distributed software environment that integrates simulations, post processing and visualization functionalities in a seamless manner. It was designed for collaborative working, letting engineers and scientists spread across a network infrastructure. In COVISE an application is divided into several processing steps, which are represented by COVISE modules. Each module is implemented as a separate process and can be arbitrarily spread across different heterogeneous machine platforms. Major emphasis was put on the usage of high performance infrastructures such as parallel and vector computers and fast networks.

Avocado

Avocado is an object-oriented framework for the development of distributed, interactive VE applications. Data distribution is achieved by transparent replication of a shared scene graph among the participating processes of a distributed application. Avocado focuses on high-end, real-time virtual environments like CAVEs [32] and Workbenches; therefore, its development is based on SGI Performer.


OpenVR and SteamVR

OpenVR is an API and runtime system that allows access to VR hardware from multiple vendors without requiring that applications have specific knowledge of the hardware they are targeting. It is distributed as an SDK that contains the API and samples. The OpenVR API provides a game with a way to interact with Virtual Reality displays without relying on a specific hardware vendor's SDK. It can be updated independently of the game to add support for new hardware or software updates. The API is implemented as a set of C++ interface classes full of pure virtual functions.

Many VR applications and engines, such as the Unreal Engine, simplify their integration of VR by using SteamVR. It leverages OpenVR and provides the software support to offer the best possible VR experience.

2.1.3 Commercial SDKs and Game Engines

At a higher development level, engines offer a wide range of tools to improve VR applications, from graphics and animation tools to audio management and, obviously, gameplay and core programming for the basic elements of the application. C++ is widely used among engines, and most of them implement one or more of the previously mentioned frameworks to provide a fully immersive VR and AR experience, leaving the programmer only the task of creating the world.

Google Daydream

Google Daydream aims to bring Virtual Reality to the mobile world. Thanks to the Google VR SDKs, Android supports both Daydream and Cardboard. Google provides SDKs to develop VR applications with pure Android, Unity Engine and Unreal Engine.

Unity3D Engine and VR

Unity is a cross-platform game engine developed by Unity Technologies which is primarily used to develop video games and simulations for PC, consoles, mobile devices and websites. The engine targets popular APIs such as Direct3D, Vulkan and OpenGL. In many aspects, it is similar to the Unreal Engine used for this thesis' project.

Unreal Engine VR

The Unreal Engine, which will be covered in detail in the following chapters, offers full support for the most common VR headsets and systems, such as the HTC Vive, the Oculus Rift and Google Cardboard and Daydream. Furthermore, it offers full support for SteamVR and the OpenVR framework.

2.2

The XVR Framework

XVR [44] is an Integrated Development Environment developed at the Scuola Superiore Sant'Anna and designed to create Virtual Reality applications. Using a modular architecture and a VR-oriented scripting language, namely S3D, XVR content can be embedded in a variety of container applications. XVR supports a wide range of VR devices (such as trackers, 3D mice, motion capture devices, stereo projection systems and HMDs). Due to its extensive usage by the VRMedia [43] group, a large number of libraries have been developed to support the mentioned devices, along with many more developed at the PERCRO laboratory of the Scuola Superiore Sant'Anna. XVR evolved over the years to include its own engine which, while powerful and extensible, has gradually slowed down in development.

For this thesis’ purpose, the XVR framework was used at the early stages of development to create a prototype of the application that used the HTC Vive and the Oculus Rift HMDs.

XVR is actually divided into two main modules:

• The ActiveX Control module, which hosts the very basic components of the technology such as versioning check and plug-in interfaces.

• the XVR Virtual Machine (VM) module which contains the core of the technology such as 3D Graphics engine, the Multimedia engine and all the software modules managing the built-in XVR features.


The XVR-VM contains a set of bytecode instructions, a set of registers, a stack and an area for storing methods. The XVR Scripting Language [45] (S3D) allows specifying the behaviour of the application, providing the basic language functionalities and the VR-related methods, available as functions or classes. The script is then compiled in a byte-code which is processed and executed by the XVR-VM.

In general, an XVR application can be represented as a main loop which integrates several loops, each one running at its own frequency, such as graphics, physics, networking, tracking, and even haptics, at least for the high-level control loop.

2.2.1

The S3D scripting language

S3D is an object-oriented, strongly and dynamically typed scripting language with a syntax very similar to JavaScript and C++. It is possible to declare variables, functions and classes.

An XVR program is always based on a set of 7 fundamental callbacks which get automatically executed upon predefined events.

These predefined functions constitute the basis of any project:

• OnDownload() is performed at the very beginning and triggers the download of the data files needed by the application.

• OnInit() is where to put the initialization code for the application. All the commands are executed sequentially. All the other functions are not active until OnInit() completes its execution. It receives as a parameter the string defined inside the XVR-related HTML code under the "UserParam" section. It gets called as soon as OnDownload() has finished.

• OnFrame() is the place for functions and methods that produce graphics output. This is the only function where the graphics context is visible; placing graphics commands outside this function would produce no results. It gets called at a frequency specified by SetFrameRate(). By default the frame rate is set to 100 Hz.


• OnTimer() runs independently (i.e. at a different rate) of OnFrame() and is where to put commands that must be independent from the rendering task. As the timer is hi-res, it is possible to set up the timer parameters with SetTimeStep() so that this function can be called up to 1k times per second. The default time step is 10 ms, therefore OnTimer() gets called by default at a frequency of 100 Hz.

• OnEvent() is independent from both OnTimer() and OnFrame(). It gets called whenever the application receives an event message. Event messages can be external (i.e. Windows messages) or internal (i.e. generated anywhere in the XVR program). Events and messages are supported in XVR because they add flexibility to the programming environment for tasks where fixed timers are not the best option. If the application does not need them, this function can be ignored.

• DownloadReady() is called whenever a download triggered by the FileDownload() function is completed and receives as a parameter the ID (returned by FileDownload()) of the downloaded file. This function can be used to asynchronously download resources not needed at the beginning of the XVR program.

• OnExit() is called when the application quits or when the user closes the page the application is in. It is the right place to perform nice exits of external modules.

In addition to these basic functions, XVR offers several predefined classes, functions and data structures for different purposes. For example, the CVmCamera class provides functions for managing the scene and the camera properties. The CVmNewMesh and CVmObject classes allow managing 3D models by handling the geometric properties of the model and by dealing with reference systems and geometrical transformations. The appearance of the objects is managed by the CVmMaterial and CVmTexture classes.


2.3

The Unreal Engine

The Unreal Engine is a graphic engine developed by the Epic Games team, mainly used to develop games and 3D applications. Unreal offers a complete suite of development tools for both professional and amateur developers to deliver high-quality applications across PC, consoles, mobile, VR and AR. The engine has a strong and large C++ framework which can be used to develop any kind of Virtual Environment. It offers a large set of tools such as:

• A user-friendly, customizable and extensible editor with a built-in real-time preview of the application.

• A robust online multiplayer framework.

• Visual effects and particle systems.

• A Material Editor.

• An animation toolset.

• A full editor in VR.

• A design built and thought for VR, AR and XR.

• A large number of free assets.

• An audio engine.

• A Marketplace where assets and plugins can be shared and downloaded.

The possibilities that the engine offers allow the developer not to worry about performance or portability, as every project can be easily ported to almost any platform with little or no effort. The Editor's UI offers a lot of features and different sections, such as the Animation flowgraphs and the Blueprint event graph. The UI is meant for both programmers and designers.

2.3.1

Terminology

This subsection explains some of the terms that will be used in the next Chapters [46]:


Figure 2.1: Every object in the scene is an actor

• Project: A Project is a unit that holds all the content and code that make up an individual game.

• Objects: The base building blocks in the Unreal Engine are called Objects. Almost everything in the Unreal Engine inherits (or gets some functionality) from an Object. In C++, UObject is the base class of all objects; it implements features such as garbage collection, metadata (UProperty) support for exposing variables to the Unreal Editor, and serialization for loading and saving.

• Classes: A Class defines the behaviors and properties of a particular entity or Object used in the creation of an Unreal Engine game. Classes are hierarchical as seen in many programming languages and can be created in C++ code or in Blueprints.

• Actors: An Actor is any object that can be placed into a level. Actors are a generic Class that supports 3D transformations such as translation, rotation, and scale. Actors can be created (spawned) and destroyed through gameplay code (C++ or Blueprints). In C++, AActor is the base class of all Actors.

• Components: A Component is a piece of functionality that can be added to an Actor. Components cannot exist by themselves; however, they can be added to an Actor, giving it access to the Component's functionalities. They may be seen as modules which any entity may implement.

• Pawns: A subclass of Actor that serves as an in-game avatar or persona, for example the characters in a game. Pawns can be controlled by a player or by the game's AI, in the form of non-player characters (NPCs).

• Character: A Character is a subclass of a Pawn Actor that is intended to be used as a player character. The Character subclass includes a collision setup, input bindings for bipedal movement, and additional code for movement controlled by the player.

• Level: A Level is a user-defined area of gameplay. Levels are created, viewed, and modified mainly by placing, transforming, and editing the properties of the Actors they contain.

• World: A World contains a list of Levels that are loaded. It handles the list of Levels and the spawning/creation of dynamic Actors (created at runtime).

• GameModes: The GameMode Class is responsible for setting the rules of the game that is being played.

• GameInstance: The GameInstance class contains information shared through all levels within a world. It’s the first entity created when a game session starts and the last one to be destroyed when the game ends.

2.3.2

The Blueprint Visual Scripting language

Figure 2.2: Example of a Blueprint graph

The Unreal Engine offers a new, different way of defining the game core logic and its entities' behavior. The Blueprints Visual Scripting [47] system is a complete gameplay scripting system based on the concept of using a node-based interface to create gameplay elements from within the Unreal Editor. It may be used to define object-oriented (OO) classes or objects in the engine. Objects defined by using the Blueprint scripting language are often referred to as 'Blueprints'. The system itself is designed to be powerful and flexible, and is mainly aimed at designers and artists with close to no C++ programming background (for this reason the Epic team is pushing toward its development). It especially gives the possibility to define functions and objects which can be directly spawned as nodes in a Blueprint graph.

The purpose of Blueprint is generally:

• Extending classes

• Storing and modifying default properties

• Managing subobject instancing classes

The concept is that programmers would set up base classes to expose a set of functions and properties as Blueprint nodes. Blueprints can be classified by their use:

• A Blueprint Class is an asset that allows content creators to easily add functionality on top of existing gameplay classes. Blueprints are created inside the Unreal Editor visually, instead of by typing code, and saved as assets in a content package. This particular kind of class has been largely used for this thesis.

• A Data-Only Blueprint is a Blueprint Class that contains only the code (in the form of node graphs), variables, and components inherited from its parent. These allow those inherited properties to be tweaked and modified, but no new elements can be added. These are essentially a replacement for archetypes and can be used to allow designers to tweak properties or set items with variations.

• A Level Blueprint is a specialized type of Blueprint that acts as a level-wide global event graph. Each level in a project has its own Level Blueprint created by default that can be edited within the Unreal Editor; however, new Level Blueprints cannot be created through the editor interface.

• A Blueprint Interface is a collection of one or more functions, which serves the same purpose as C++ interfaces.

A Blueprint is generally composed of:

• Functions.

• Variables.

• An EventGraph, which contains a node graph that uses events and function calls to perform actions in response to gameplay events associated with the Blueprint. This is where interactivity and dynamic responses are set up.

2.3.3

C++ development

In order to develop using C++, the UnrealBuildTool has to be executed for creating the C++ project, which uses Visual Studio as IDE. There are several configurations available that determine how the engine is compiled:

• Debug: This configuration contains symbols for debugging. This configuration builds both engine and game code in debug configuration.

• DebugGame: This configuration builds the engine as optimized, but leaves the game code debuggable. This configuration is ideal for debugging only game modules.

• Development: This configuration is equivalent to Release. Unreal Editor uses the Development configuration by default. Compiling your project using the Development configuration enables the programmer to see code changes made to the project reflected in the editor. This is done with the HotReload feature, which allows the Unreal Editor to detect newly compiled DLL files and load them automatically.


• Shipping: This is the configuration for optimal performance and shipping the application. This configuration strips out console commands, stats, and profiling tools.

• Test: This configuration is the Shipping configuration, but with some console commands, stats, and profiling tools enabled.

Figure 2.3: User interface of Visual Studio 2017

2.3.4

Unreal Engine open source

Epic Games makes the entire source code of the engine available on GitHub [48]. The repository is set as private, but any GitHub account can be linked to an Epic Games account in order to see the code.

Any programmer can aid the development of the engine and see the latest changes before the new official release. Moreover, developers can compile the entire source code in order to modify the engine to suit the needs of their current project.

It is also very helpful for clearly understanding how certain features are implemented and for tracking down performance issues or bugs.


Figure 2.4: The Unreal Engine GitHub page

2.4

VR approaches for Immersive Design

According to [53], the current application of Virtual Reality systems in the design process is limited mostly to design review. Weidlich et al. identify the reason for this limitation in the different data formats used for CAD and VR visualization, and propose the use of VR systems also during the early design stage in order to drastically reduce potential sources of error, especially during the outline and detailing phases.

The ways in which researchers have implemented immersive modeling are:

• Linking a VR system and a CAD core. An example is ARCADE [49], a cooperative, directly manipulative 3D modeling system that takes a user-centered approach.

• Using voxel models for geometry description. Examples are VCM [50], which allows virtual modeling by adding and removing voxels and uses haptic feedback, and VADE [51], in which the VR environment used is a HMD.


2.5

VR for Training

The use of fully immersive Virtual Reality systems for training purposes has been extensively investigated in literature. A number of challenges have been highlighted, ranging from minimizing overall latency, interacting intuitively in the virtual environment, increasing the user's perceptual awareness of the virtual world and providing the user with a strong sense of immersion and embodiment [52]. Examples can be found in the mining industry [54], in the aerospace industry [55], in the automotive industry [57], in logistics [58] and, in general, in the sector of maintenance [59].

Opportunities provided by Mixed and Augmented Reality address issues similar to those addressed by VR, although with a slightly different perspective. In Mixed Reality, in fact, the real environment is not substituted by a virtual counterpart; rather, it is integrated with virtual elements that enhance its information content. Therefore many safety issues effectively tackled by VR are commonly not addressed by MR systems which, in turn, may prove more effective whenever the real context is fundamental.

In general, one of the most important consequences of leaving the training experience to a totally virtual context (as in VR), or keeping the vision of the real context (as in MR), is related to the visual self-perception of the body. In VR, depending on the visualization device, users can still see their body (for instance in a CAVE) or not (if using a HMD; in this case a digital avatar must be shown in order to allow self-perception). In MR the real context, including the user's own body, is always present. This of course has an impact on training, especially in tasks where manipulation operations, or other types of direct interaction with the body, take place. Avatar representations, in fact, might not correspond exactly to the dimensions or the current posture of the user and might, although slightly, mislead self-perception and limit the effectiveness of the virtual training.

This analysis resulted in the work of [56], in which a system was developed to allow users to see their own real hands by means of a 3D camera placed on a HMD. Through simple gestures, the user could grab and move the components of a machine in order to perform training tasks in which he was asked to


3. Overall System Design

After having investigated the technologies and methodologies used today for interaction in virtual environments, it is clear that there is a multitude of elements that can be adopted to create a virtual reality application. Given the high number of possibilities, it was important to choose the right environment to develop the project.

The main requirements that were taken into account when choosing the framework to use to create the system were:

• Flexibility: the framework should allow the programmer to easily create a virtual environment and easily develop systems in it.

• Full support for VR headsets: it is essential that new headsets such as the Oculus Rift and the HTC Vive are fully functional in the environ-ment.

• Possibility to efficiently load meshes at runtime: since the application needs to handle objects that are unknown at compile time, the framework should be able to load meshes at runtime with fast loading times.

• Rendering capabilities: in order to achieve the illusion of reality in the application, the framework needs to be able to create realistic looking environments, whilst still maintaining high framerate to avoid motion sickness for the user.

This chapter describes the choices that were made in order to develop the system, the difficulties that were encountered and how they were addressed. The development started with a prototype created using the XVR framework, which was then discarded in favor of the Unreal Engine 4. The reasons for switching framework are described in Section 3.2.


3.1

The XVR prototype

As described in Chapter 2, XVR is a very flexible framework designed for VR and it has already proved its worth in many academic applications. For this reason, it was the first choice as the framework to use in order to develop the system.

3.1.1

Support for HMDs

Support for the new retail versions of the HTC Vive and the Oculus Rift was added as an external C++ library in order to create the prototype. The library was created by integrating the API offered by the open-source OpenVR SDK, and it allows an XVR application to:

• Access the position and orientation of the connected HMD and any (optional) Motion Controllers.

• Retrieve the size of the Play Area, the area in which the user is able to walk.

• Obtain any information regarding inputs from the Motion Controllers.

• Trigger haptic feedback to the Motion Controllers.

• Handle the stereo rendering for the HMD.

3.1.2

User movements

At first, the main objective was to address the issues regarding the user’s movements in the virtual environment.

Users should be able to physically walk around in the room, while their virtual avatar follows in the virtual environment. This is achieved by exploiting the tracking capabilities of the Lighthouse technology offered by the HTC Vive or the Constellation system developed for Oculus Rift devices. This substantially improves the immersion of the users, allowing them to really perceive the object as if it were standing before them, while also decreasing motion sickness.


However, the aim of the application is to display any CAD object, the size of which can greatly surpass the Room-Scale tracking capabilities offered by the HTC Vive and the Oculus Rift. For this reason, it cannot be assumed that the whole CAD object fits in the room in which the application runs. As a consequence, users would not be able to physically walk around the CAD object without reaching the physical room's boundaries.

To overcome this limitation, it was decided that a teleportation system was the best solution for this situation. With the press of a button, users are able to teleport themselves around the virtual environment, while still being able to walk around the physical room. They can teleport above any surface that is flat enough and not too steep to be walked on.

Figure 3.1: The teleportation guidance for users. The circle represents the position they would end up to, while the rectangle shows the boundaries of the physical room

To limit the motion sickness derived from the sudden movement introduced by the teleportation mechanism, different solutions were considered, such as moving the player slowly from the initial point to the ending point when needed. In the end, a simple fade effect was considered the best approach: when a user starts the teleport action, these steps are executed:


1. The screen fades to black quickly but smoothly.

2. The system moves the virtual avatar of the user to the desired position.

3. The screen fades back from black to normal.

When users press the teleport button without releasing it, they can see where they would end up in the virtual environment as a small, animated circle at the desired position. Furthermore, an animated rectangle shows users the boundaries of the physical room they are in and where they are positioned within it. This way, they are able to clearly see what can be physically reached in the new position after teleporting, as can be seen in Figure 3.1.

With the press of another button (the grip button on HTC Vive’s controllers and the thumbstick on the Oculus Touch), users can still show at any moment the rectangle representing the boundaries of the room, which turns red when they’re close to the edge. This way users are always able to see how far they can physically walk before reaching the constraint of the physical room they’re in, avoiding any potential injuries to themselves.


3.1.3

User interaction: menu

With the user's navigation implemented, the focus moved to user interaction; in particular, to interaction with menus. Since the application should allow the user to interact in some way with the system, it was important to find a way that only involved the use of the Motion Controllers as pointing devices and that overcame the problem of where to position the menu in the virtual environment, considering that the virtual world in which the virtual avatar moves can be indefinitely large, whilst the menu needs to be always available. The obvious choice was to directly incorporate the menu into one of the Motion Controllers, in a way that resembles a sort of tablet, while the other Motion Controller, from now on referred to as "Action Controller", is used as a pointing device.

Figure 3.3: The 3D menu over the Motion Controller

The menu was then structured as a floating 3D cube that has the Motion Controller itself as its center and is rendered on top of it, always following its every movement. On each face of the cube, icons were arranged, each representing a different tool with a short tooltip describing the tool's purpose, accessible by pointing the Action Controller at the icon and then pressing the trigger button. The cube itself could be rotated by the user either by swiping left or right on the touchpad of the Motion Controller holding the menu, if the user was using the HTC Vive, or by moving the analog thumbstick left or right if he was using the Oculus Rift; this way the user is able to reach all the tools in the 3D menu without having to physically rotate the hand holding the Motion Controller with the menu attached.

The code that managed the menu was structured in such a way that new tools could easily be added to the menu, each with its own logic. For the purpose of the prototype, only a simple tool that allowed the user to place 3D notes in the virtual environment was introduced. The tool's job was mainly to show how different information could be displayed when a tool was selected: the face of the cube on which the tool is present changes with a fluid animation when the tool is activated, showing on the same face any information related to it. For the 3D notes tool, it showed a simple color picker implemented in the HSV color space, which the user could use to select a color for the 3D notes by simply pointing the Action Controller and pressing the trigger button. When the user instead points away from the menu and the color picker, the 3D note appears at the tip of the Action Controller and can be released in the virtual environment by pressing the trigger button, allowing another one to be created in the same way.

Figure 3.4: The notes’ tool, with the HSV color picker (Left) and the notes (Right)


3.1.4

User interaction: object manipulation

In the prototype, simple object manipulation was implemented with grabbing mechanics. When the Action Controller is close enough to a piece of the CAD object, the Action Controller itself is highlighted, to indicate that an action can be performed. If the trigger button is pressed, the piece in front of the Action Controller is grabbed and starts moving along with the controller, until the button is released.

Figure 3.5: (Left) the Action Controller is highlighted when close to the piece (which is a test cube in this case). (Right) the piece is being grabbed by the user

3.2

The Unreal Engine 4: feasibility study

At this point, the basic elements for the interaction were implemented in the XVR framework. However, the necessity of achieving realistic looking environments highlighted the need to experiment with other solutions. In fact, although the XVR framework proved to be a valid choice as the main tool of the project, its rendering capabilities are now surpassed by other modern engines.

In particular, the attention fell on the Unreal Engine 4. Its vast framework, deep support for VR and robust rendering system based on the DirectX 11 pipeline are some of the reasons why it became a popular engine not only in the gaming scene, but also in academia.


Despite these qualities, in the context of this thesis there were some concerns regarding the feasibility of the project, particularly regarding the loading of meshes at runtime. The Unreal Engine 4 requires developers to load meshes in the Editor first, before they can be used in the Virtual Environment. This is in contrast with the fact that the CAD objects' meshes are not built into the project itself but are to be passed from an external tool. As a consequence, the first obstacle was to load a mesh at runtime using only the Unreal Engine 4 framework.

3.2.1

The Unreal Engine: an overview

To describe how the runtime mesh loading was accomplished inside the Unreal Engine and how an evaluation of the results was made, an overview of the basic elements of the engine is presented. In fact, the Engine offers a large set of functionalities:

• Advanced real-time rendering

• Visual and post-process effects

• Materials pipeline

• Animation and physics

• Artificial intelligence

• Complete audio system

• and many more...

A project inside the Unreal Engine can be developed by using:

• Unreal C++: A large API which redesigns the way C++ is used to create projects and application elements.

• Blueprint Visual Scripting: A visual scripting language which runs in its own Virtual Machine. It compiles to an intermediate form instead of directly compiling to machine code.


The Blueprint Visual Scripting language is meant for designers and newcomers and aims to simplify the development of simple applications. For big projects, its graph may become large and intricate. Furthermore, a Blueprint graph needs to be replicated in case of usage in different projects, and it can be up to ten times slower than C++. Blueprint can do everything that concerns the logic part of the application (the gameplay), while it is not meant to manage graphics, audio or other aspects of the application.

The two approaches (Unreal C++ and Blueprint) are not mutually exclusive and they can work together. In fact, C++ classes and functions can be directly exposed in the Blueprint system, allowing it to easily extend the functionalities. The Epic Games team encourages developers to use C++ for the most complex and time-consuming tasks, which should then be exposed in Blueprint graphs to create a fast development workflow.

Relevant Gameplay Elements

Gameplay elements are part of the Engine and define the "actors" of the application, the entities that take up a role in the virtual environment. Some of them are listed here:

• Actor: "An Actor is any object that can be placed into a level. Actors are a generic Class that support 3D transformations such as translation, rotation and scale. Actors can be created (spawned) and destroyed through gameplay code (C++ or Blueprints)".

• Pawn: Each Pawn is an Actor associated with a specific controller. A Pawn is basically the user-playable actor in the virtual world.

• Character: A specific case of a Pawn.

An Object (which can also be called an Actor) may be seen as "[...] Containers that hold special types of Objects called Components. Different types of Components can be used to control how Actors move, how they are rendered, etc. The other main function of Actors is the replication of properties and function calls across the network during play.".


Components are defined as Objects "[...] Generally used in situations where it is necessary to have easily swappable parts in order to change the behavior or functionality of some particular aspect of the owning Actor".

For instance, an actor may include components such as:

• UActorComponent: is the base class for components that define reusable behavior that can be added to different types of Actors.

• USceneComponent: A SceneComponent has a transform and supports attachment, but has no rendering or collision capabilities. Useful as a ’dummy’ component in the hierarchy to offset others.

• UPrimitiveComponent: PrimitiveComponents are SceneComponents that contain or generate some sort of geometry, generally to be rendered or used as collision data.

Components can be assembled inside an Actor to form a hierarchy. A Component that is placed as a "child" of another Component moves, rotates and scales together with its parent Component.
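The hierarchy above can be sketched in an Actor's constructor as follows; the class and member names are hypothetical, and the code only compiles inside an Unreal project:

```cpp
// Hypothetical example of building a Component hierarchy inside an Actor.
AInspectionCamera::AInspectionCamera()
{
    // The root Component defines the Actor's own transform.
    RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

    // A child mesh: it inherits the root's translation, rotation and scale.
    MeshComp = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
    MeshComp->SetupAttachment(RootComponent);

    // A child's offset is expressed relative to its parent Component.
    MeshComp->SetRelativeLocation(FVector(0.f, 0.f, 50.f));
}
```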

The performance profiler

Performance is an ever-present concern when making real-time 3D applications. To create the illusion of moving images, a frame rate of at least 15 frames per second is needed; depending on the platform and the application, 30, 60, or even more frames per second may be the target. In the case of VR, the target is set to 90 frames per second to avoid inducing motion sickness in the user.

Unreal Engine provides many features, each with different performance characteristics. In order to optimize content and code to achieve the required performance, it is important to keep track of where the frame time is spent; the engine's profiling tools serve this purpose.



Figure 3.6: The GPU Profiler showing the analysis of a single frame of the application

For the purposes of this thesis, the performance of the GPU was deemed more important, since the CPU only handles minor tasks. The GPU, instead, must render each frame twice (once per eye) for VR stereo rendering, while still maintaining a high frame rate.

In this regard, the GPU profiler was used extensively to understand how new rendering additions, such as those described in the next subsection, would affect the overall rendering performance.

Furthermore, Unreal provides the Unreal Frontend (UFE) tool, intended to simplify and speed up development, profiling, and testing tasks. UFE is designed to be the central interface for all of these tasks and can also be used remotely.



Figure 3.7: The Unreal FrontEnd tool. A stats file is opened and is being analyzed

Statistics were recorded from the running application using the stat startfile and stat stopfile commands, which generate a statistics file. This file was then opened with the UFE tool and the recorded chunks analyzed for any performance faults.
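For reference, these are the main engine console commands used during profiling; they are entered in the in-game console (or from the Editor), not in a system shell:

```
stat fps        -- display the current frame rate
stat unit       -- frame, game, draw and GPU thread times
profilegpu      -- capture a single frame for the GPU Visualizer
stat startfile  -- begin recording a statistics capture to file
stat stopfile   -- stop recording; the file can be opened in Unreal Frontend
```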

3.2.2 Runtime mesh: loading

Recently, Epic Games released an experimental official plugin that offers a new Component class called ProceduralMeshComponent, available both in C++ and in the Blueprint Visual Scripting system. This class allows a programmer to manually create (and modify) meshes, both in the Editor and at runtime, by directly specifying:

• Vertices that compose the mesh

• Indices that make up the triangles

• Per-vertex normals (optional)

• UVs (optional array of texture coordinates for each vertex)

• Vertex colors (optional array of colors for each vertex)



• Tangents (optional array of tangent vectors for each vertex)

Furthermore, the Component manages the (optional) creation of the collision shell used for basic interaction with the mesh, and it allows dividing a mesh into sections, each one with a different material. By using this Component along with a parser for a given mesh format, runtime object creation could be accomplished.
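Feeding parsed data to the Component can be sketched as follows; this is a hypothetical helper (the class and variable names are invented) and it only compiles inside an Unreal project with the plugin enabled:

```cpp
// Hypothetical sketch: creating one mesh section from parsed geometry.
void AMeshLoader::BuildSection(UProceduralMeshComponent* ProcMesh,
                               const TArray<FVector>& Vertices,
                               const TArray<int32>& Triangles,
                               const TArray<FVector>& Normals,
                               const TArray<FVector2D>& UVs)
{
    // One section per material; the last argument enables the optional
    // collision shell mentioned above.
    ProcMesh->CreateMeshSection(0, Vertices, Triangles, Normals, UVs,
                                TArray<FColor>(),           // vertex colors
                                TArray<FProcMeshTangent>(), // tangents
                                true /*bCreateCollision*/);
}
```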

In order to better understand how the plugin works at a lower level, the Unreal Engine source code was downloaded from the GitHub repository, compiled, and the parts related to procedural mesh creation were investigated.

To test it, the Assimp library [65] was used. Assimp is a portable open-source C++ library that allows a developer to parse various well-known 3D model formats in a uniform manner. In this case, it was used to load meshes in OBJ format. The library handled the extraction of vertices, indices, normals and the rest of the mesh data, which were then passed to the ProceduralMeshComponent in order to create the mesh.

Figure 3.8: A test mesh loaded at runtime in a sample level. No materials have been applied

At this point, basic meshes could be loaded, but no materials were applied, since these first had to be converted into Unreal materials. Since new materials cannot be defined at runtime, the solution was to create a template material in the Editor, containing parameters for properties such as the output color and texture. At runtime, the steps needed to create the materials for the mesh are the following:

1. An instance of the template material is created.

2. Using Assimp, the material information stored in the MTL file accompanying the OBJ mesh is parsed.

3. The retrieved information is then used on the new Unreal material to manually recreate the desired material.

The results can be seen in Figure 3.9. Unreal materials that use transparency are handled differently from opaque materials because of the Deferred Shading [60] pipeline; therefore, a second, translucent template material was created to compensate. When the mesh's materials are parsed, the translucent template material is used instead of the standard one whenever any kind of transparency is involved.
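Steps 1 and 3 of the procedure above can be sketched in Unreal C++ as follows; the variable and parameter names (TemplateMaterial, "BaseColor", and so on) are hypothetical and must match those defined in the template material created in the Editor, and the snippet only compiles inside an Unreal project:

```cpp
// Hypothetical sketch: instancing the template material and applying
// values parsed from the MTL file (Kd, DiffuseTex come from step 2).
UMaterialInstanceDynamic* MatInst =
    UMaterialInstanceDynamic::Create(TemplateMaterial, this);

// Recreate the desired material from the parsed MTL properties.
MatInst->SetVectorParameterValue(TEXT("BaseColor"),
                                 FLinearColor(Kd.r, Kd.g, Kd.b));
MatInst->SetTextureParameterValue(TEXT("DiffuseTexture"), DiffuseTex);

// Assign the instance to the corresponding mesh section.
ProcMesh->SetMaterial(SectionIndex, MatInst);
```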

Figure 3.9: Another test mesh loaded at runtime in a sample level. Materials are included
