Virtual reality and data collection
Virtual reality is growing rapidly and gaining momentum across many industries. Most people associate virtual reality (VR) with video games, and it’s true that video games constitute the bulk of current VR content. However, VR offers incredible potential outside the entertainment industry, with applications that can benefit businesses of all kinds.
Moreover, VR is an invaluable source of data that, if properly harnessed, can be of tremendous benefit to industries. Data capture is often a hidden feature of industry-oriented VR applications, yet it is perhaps one of their most important aspects. In this article we discuss the types of data available through VR equipment and the benefits those data offer from a business perspective.
Why collect data?
One may wonder why we need to capture all these data, and to what end. It all depends on the type of VR environment being created and its purpose. In some cases, such as training simulations, it may be necessary to capture all available data, whereas in a casual tourism app or a VR house visit, there’s little point in capturing every action.
In the latter case, capturing the user’s position and gaze (head and eye tracking) during the session may well be enough, whereas in the former case of training simulations, one may need to capture everything from position and rotation to interactions with VR gloves, full-body exoskeletons, external trackers, etc.
Generally, the use of VR data falls into two broad categories: 1) analysis and 2) dynamic feedback. Analysis refers to capturing and persisting VR data during a session so they can later be analyzed for various business-oriented purposes. Dynamic feedback, by contrast, uses the captured data in real time to adapt the VR experience: we constantly capture data and use them on the fly to change the colors or positions of objects, or to create unexpected scenarios such as rain, fire, explosions, clouds, or darkness.
In fact, every action done by the user is captured in real time and used to determine the appropriate response to that action. For instance, in a car simulator under rainy conditions, if the driver goes too fast, the speed and location of the car at that precise moment are captured and used to generate an accident or uncontrolled skidding.
Types of Data and Use Cases
When thinking about VR, one may wonder what types of data VR produces and how we can track them for good use. The good news is that, unlike in real life, where it’s practically impossible to track everything a person is doing, in VR everything is easy to track and capture by default, whether for later analysis or for other purposes. In fact, the data available in VR are tantamount to having a real person wear a GPS tracking device, a speedometer, and a full-body sensor suit while they complete a training session or some other activity.
In addition, in VR we create and control the digital environment in which the user is placed. Therefore, all the objects around the user allow us to capture even more data. There are roughly three main data types: core, secondary, and derived, which are explained in the following sections.
Core Data Items
This type of data represents the most fundamental information available from any VR equipment, even the most basic ones.
The most important piece of information coming from VR is the accurate 3D coordinates (x, y, z) of the user at any given time during a VR session. The VR headset is the center of all tracking for positioning the user’s virtual body in VR, and these coordinates are a three-dimensional vector representing the actual location of the user’s head in 3D space along the X, Y, and Z axes.
Obviously, these coordinates are at head level, so if the feet position is needed, we subtract the person’s height from the head’s vertical coordinate. A person’s height is pre-configured in the VR system during initial setup, so every VR environment is automatically calibrated to the real person’s height.
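As a minimal sketch of this calculation, the following assumes a Unity-style Y-up coordinate system with units in meters; the function name and sample values are illustrative:

```python
# Sketch: approximate feet position from the tracked head position,
# assuming a Y-up coordinate system (as in Unity) with units in meters.
# The user's height would come from the initial VR system calibration.

def feet_position(head_pos, user_height):
    """Return the (x, y, z) point on the floor directly below the head."""
    x, y, z = head_pos
    return (x, y - user_height, z)

# Example: a 1.75 m tall user whose headset is at (2.0, 1.75, -3.0)
print(feet_position((2.0, 1.75, -3.0), 1.75))  # -> (2.0, 0.0, -3.0)
```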
Normally, the default unit of measurement for any coordinate in VR is dictated by the game engine used. For instance, Unity, a very reputable game engine for VR, uses meters, whereas Unreal Engine, another top engine, uses centimeters by default. It’s possible to change the engine’s default unit, but in any case, every coordinate captured in VR is expressed in that unit of measurement.
With the combination of head position and time, it’s rather easy to reconstruct the exact trajectory taken by a user in the VR environment. For instance, take the case of a VR reconstruction of an ancient city where users can wander about and explore artifacts, structures, etc.
These data can be tracked and used to analyze the typical paths taken by most users and which areas of the virtual city get the most attention from visitors. Based on this analysis, the application designers can choose to provide more historical context in those areas rather than in areas where people don’t spend much time.
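One simple way to find those high-attention areas is to bin head positions into a coarse floor grid and count samples per cell. This is a sketch under assumed data; the cell size and sample coordinates are illustrative:

```python
# Sketch: bin timestamped head positions into a 2D floor grid to see
# which areas attract the most visitors. Cell size and samples are
# illustrative assumptions.
from collections import Counter

def occupancy_heatmap(positions, cell_size=2.0):
    """Count samples per (x, z) floor cell; y (height) is ignored."""
    counts = Counter()
    for x, y, z in positions:
        cell = (int(x // cell_size), int(z // cell_size))
        counts[cell] += 1
    return counts

samples = [(0.5, 1.7, 0.5), (1.0, 1.7, 1.2), (5.1, 1.7, 0.3)]
heatmap = occupancy_heatmap(samples)
print(heatmap.most_common(1))  # -> [((0, 0), 2)]: the busiest cell
```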
The other very important data item is the headset orientation or rotation, which is represented as Euler angles around the X, Y, and Z axes.
Do you want to know where your users are looking in your VR environment and create a 3D heatmap of the areas getting the most attention? Are you interested in knowing whether your trainees are easily distracted instead of focused on their job? Maybe you want to know where a student driver was looking before causing a virtual car crash, in order to give them proper advice. None of this would be easy in real life, but the wealth of VR data makes it readily achievable.
As discussed above, head orientation is the default way to capture the user’s gaze, that is, where they are looking. In VR, a technique called raycasting shoots a virtual laser beam from the center of the user’s gaze until it hits the nearest virtual object. That target indicates the object the user was most likely looking at. This is extremely useful because the mere location of a person is not sufficient to fully understand how they interacted with the VR environment.
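A raycast of this kind can be sketched as follows, assuming objects are approximated by bounding spheres; the angle convention, object names, and coordinates are illustrative, not any engine’s actual API:

```python
# Sketch: gaze raycasting against simplified spherical object bounds.
# Yaw/pitch convention and the scene contents are assumptions.
import math

def gaze_direction(yaw_deg, pitch_deg):
    """Forward unit vector from head yaw/pitch Euler angles (degrees)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def raycast(origin, direction, objects):
    """Return the name of the nearest sphere hit by the ray, or None.
    Each object is (name, center, radius)."""
    best = (None, float("inf"))
    for name, center, radius in objects:
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - c
        if disc >= 0:
            t = -b - math.sqrt(disc)  # nearest intersection distance
            if 0 <= t < best[1]:
                best = (name, t)
    return best[0]

objects = [("statue", (0.0, 1.5, 5.0), 1.0), ("vase", (0.0, 1.5, 12.0), 1.0)]
# Looking straight ahead (+Z) from head height, the statue is hit first:
print(raycast((0.0, 1.5, 0.0), gaze_direction(0, 0), objects))  # -> statue
```

In a real engine you would call the built-in physics raycast instead; the point is that the hit object and hit distance are directly available as data.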
Position and rotation would be insufficient without the time variable. In VR, we have access both to the system clock (current time) and to a “game” clock (game duration), both of which provide extremely precise timing for anything going on in the game environment. At any point during a VR session, every piece of captured data can be mapped to a sub-millisecond timestamp of when the event happened.
For instance, head position and rotation can be mapped to time, capturing the entire trajectory taken by a user from the beginning of the session to the end. As another practical example, time lets us know that at time X the user was at location Y looking at object Z, and that they then stayed at that location staring at the same object for 10 seconds.
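Computing that “stared at object Z for 10 seconds” figure from a timestamped gaze log could look like the sketch below; the log format and sample values are assumptions:

```python
# Sketch: total time spent gazing at each object, from a timestamped
# log of (timestamp_seconds, object_name) samples. Data are illustrative.

def dwell_times(gaze_log):
    """Sum the time spent on each gazed-at object across the session."""
    totals = {}
    for (t0, obj), (t1, _next) in zip(gaze_log, gaze_log[1:]):
        totals[obj] = totals.get(obj, 0.0) + (t1 - t0)
    return totals

log = [(0.0, "statue"), (4.0, "statue"), (10.0, "vase"), (12.0, "vase")]
print(dwell_times(log))  # -> {'statue': 10.0, 'vase': 2.0}
```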
Another key aspect is the tracking of hand positions and movements. In 6-DoF (Degrees of Freedom) VR, there are always two hand controllers that allow interactions with the VR world. Like the headset, these hand controllers have positional tracking, which allows us to capture their exact XYZ position and rotation at any given time. With 3-DoF VR equipment, it’s possible that only one hand controller is available, in which case only rotation is tracked.
These data constitute the most elementary data items available in any VR scenario. The data items discussed in the next section are optional, depending on additional equipment.
Secondary Data Items
A few newer VR headsets have started integrating eye-tracking, whereby the exact orientation of the pupils is tracked to determine the gaze direction more accurately. This feature is built into the HTC Vive Pro Eye, the Varjo VR-1/XR-1, and FOVE; other devices can add eye-tracking through external kits such as Tobii VR and Pupil Labs.
One of the more obvious benefits of eye-tracking is the ability to know with utmost accuracy where the eyes, and not only the head, were looking. This opens up possibilities for more natural and intuitive interactions in VR. For instance, when the user looks at an object for more than 2 seconds, contextual information could appear, or a button could pop up allowing the user to do something with that object.
A less obvious, though extremely important, application of eye-tracking is called foveated rendering. This new technology mimics how the human eye works: when we focus on an object, the area we’re looking at is very detailed and crisp, while the immediate surroundings are blurred. This remarkable feature of our eyes reduces the amount of input our brain has to process.
If every single object in our entire field of view were in ultra-high resolution, our brain would be overloaded. That’s what foveated rendering achieves with eye-tracking: it renders the image in high resolution where our eyes are looking and in lower resolution in our peripheral vision. This lowers the demand on the GPU, freeing capacity for more advanced interactions or higher-quality 3D models; after all, the processing power spent rendering ultra-high-resolution peripheral images is wasted, since our eyes can’t benefit from it in VR.
With additional equipment such as VR gloves, exoskeletons, and body trackers, one can track all body movements from head to feet. VR gloves allow us to capture the exact position and pressure of every finger. Furthermore, they provide advanced haptic feedback whereby the user actually feels the virtual objects.
Trackers such as HTC’s Vive Trackers can be attached to any limb (arms, elbows, legs, knees, feet, hips) and even to external objects such as a baseball bat, a pipe wrench, a racket, a fire hose, a virtual gun for police or military training, etc. These provide additional tracking possibilities for either your full body or for an external piece of equipment that the VR user is working with.
In the case of body limbs, these trackers will transmit their exact position and rotation, which can be used to reproduce a real body in VR where users can see their bodies instead of just having a head (for both single-user and multi-user environments).
The main goal of full-body tracking is to provide a more realistic user experience and enable more powerful and natural interactions. Full-body tracking is quite a wide topic, so we’ll start with its first component: more advanced hand tracking.
Normal hand controllers provide just positional tracking and programmable buttons for specific actions. For instance, the Vive controllers offer a trigger, two side grip buttons, a pressable touchpad, and a menu button. VR interactions happen through combinations of these buttons, as determined by the programmer. However, this type of interaction is less natural than what humans are used to, which is why VR gloves can be used to allow extremely natural interactions.
Moreover, VR gloves provide advanced haptic feedback to mimic real sensations when interacting with objects. With these gloves, the user can perform very precise actions on small objects or use gestures. The data available through gloves are endless: you can track each finger’s position and use these data for all kinds of interactions. In that case, tracking individual fingers may be less relevant for analysis purposes than for improving the user experience.
Another hand-tracking device is the Leap Motion, a 3D camera mounted on a VR headset that tracks the user’s real hands and reproduces them in VR. This means users can use their bare hands to perform all kinds of operations in VR.
One downside of the Leap Motion is the lack of the haptic feedback you get from VR gloves. A good VR headset combined with VR gloves or good hand controllers provides a very realistic experience for most people. However, some will want to push the envelope for something even more realistic and immersive. That’s where VR ‘exoskeletons’ and body trackers come into play.
Starting with body trackers: these are small wireless devices that can be strapped, velcroed, or otherwise attached to any body limb or physical object, allowing us to integrate them into the VR experience. With body trackers, users can use their feet or arms to interact with objects, or simply see their own and other players’ limbs instead of heads suspended in mid-air, as is normally the case.
From a data perspective, a lot of information is available through these trackers and can be harnessed to understand the movements a person makes in VR, as well as how they’re using external objects. For instance, in sports, one could practice tennis using a real racket with a Vive Tracker attached. From the tracker, it would be possible to measure the speed at which the racket hits a virtual ball and where the ball strikes the racket.
VR suits, or exoskeletons, are much more complex and not as commonplace as other VR equipment. Though they offer tracking data, their main objective is to provide a more realistic experience by letting the entire body feel the virtual world.
Take the case of the Teslasuit: this suit provides full-body haptic feedback, allowing the user to experience real-life sensations such as temperature, physical exertion, and touch. It tracks all actions and movements of the body, and these data can be used for analysis or simply to improve the user experience by dynamically adapting the environment to the user’s actions and movements. It also comes with built-in body sensors that track the user’s physical responses to emotion, stress, etc., in the VR environment.
It’s important to know that VR suits are a technology still in its infancy and not yet widely available. They represent a niche market oriented toward businesses that can afford them for very specific VR scenarios. Many of these suits are not yet production-ready, and some makers have gone out of business because of the incredible costs involved in producing such high-tech equipment.
Most VR headsets have a built-in microphone, which can be used to capture what users say during a VR session. For instance, most users will mumble or exclaim while in VR, which can be recorded with their consent.
It’s also possible to integrate other equipment, such as smartwatches and fitness trackers, into a VR ecosystem where all these data are captured and timestamped along with the core and secondary data. For instance, it’s possible to capture a user’s heart rate while they perform some action in VR or go through a stressful situation.
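Because everything is timestamped, aligning an external sensor stream with VR events reduces to matching timestamps. A minimal sketch, with an assumed sample format and illustrative values:

```python
# Sketch: align a fitness tracker's heart-rate stream with a VR event
# by finding the sample nearest in time. Data format is an assumption.

def nearest_heart_rate(hr_samples, event_time):
    """Return the bpm of the sample closest in time to a VR event.
    hr_samples is a list of (timestamp_seconds, bpm) tuples."""
    return min(hr_samples, key=lambda s: abs(s[0] - event_time))[1]

hr = [(0.0, 72), (5.0, 75), (10.0, 110)]
# A stressful VR event (e.g. a simulated skid) logged at t = 9.2 s:
print(nearest_heart_rate(hr, 9.2))  # -> 110
```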
Derived Data Items
What we have discussed so far are raw data available as-is from various types of VR gear. However, these raw data allow us to derive other data that are more useful from a practical point of view.
One of them is speed of movement. VR does not provide a built-in speedometer that tracks how fast a user is moving, but none is needed: head position and time are all it takes. We simply calculate the distance between two head coordinates and divide by the time elapsed between them. This data item is very important in certain VR environments, such as driving and flying simulations, or training scenarios where time is critical to a successful operation.
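The calculation itself is a one-liner; this sketch assumes engine units of meters (the Unity default) and illustrative sample values:

```python
# Sketch: average speed between two timestamped head positions,
# assuming engine units are meters (the Unity default).
import math

def speed(p0, t0, p1, t1):
    """Average speed (m/s) between positions p0 at t0 and p1 at t1."""
    return math.dist(p0, p1) / (t1 - t0)  # Euclidean 3D distance / time

# Head moved 3 m along X and 4 m along Z over 2 seconds:
print(speed((0, 1.7, 0), 10.0, (3, 1.7, 4), 12.0))  # -> 2.5 (m/s)
```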
Most VR systems allow multiple users either to share the same physical space (e.g. HTC Vive) or to be in different locations while seeing each other’s avatars in VR. This is called a multi-player setting. It doesn’t make sense for all VR scenarios, but in some training contexts where an operation has to be performed by multiple users, it comes in handy. In those cases, all the data discussed above can be tracked for all users simultaneously.
Since we know the position and rotation of the head and hands, and possibly of other body limbs, the next logical step is to capture every single action the user performs in the VR environment. Every time a tracked body part (headset, hand controller, or limb tracker) touches a virtual object, we know exactly which part of the body touched which object.
Combined with the buttons on the hand controllers or with hand gestures, this makes it possible to differentiate between a mere touch, a hit, and a grab. The grab action lets the user move virtual objects around and even touch one object with another object held in the virtual hand. All these interactions can be captured, including the impact each action had on the object. In other words, we don’t just know that the hand touched or grabbed an object; we know that the object was moved 2.5 meters to the left of the user.
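Such an interaction log could be structured like the sketch below; the record fields, names, and sample values are illustrative assumptions, not any engine’s actual event format:

```python
# Sketch: logging grab interactions along with the displacement they
# caused. Field names and sample values are illustrative assumptions.
import math

def log_interaction(events, body_part, action, obj, start_pos, end_pos, t):
    """Append an interaction record including how far the object moved."""
    events.append({
        "time": t,
        "body_part": body_part,   # e.g. "right_hand"
        "action": action,         # "touch", "hit", or "grab"
        "object": obj,
        "displacement_m": round(math.dist(start_pos, end_pos), 2),
    })

events = []
# Right hand grabbed the toolbox and moved it 2.5 m along X:
log_interaction(events, "right_hand", "grab", "toolbox",
                (1.0, 0.9, 2.0), (3.5, 0.9, 2.0), t=42.7)
print(events[0]["displacement_m"])  # -> 2.5
```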
With all the VR technologies available these days, supernatural features are possible, such as laser-pointing at an object to interact with it, flying, hovering, instantaneous travel, etc. All these constitute derived data that can be tracked to give more insight into how people are using VR.
Conclusion
VR is a very promising technology that will bring many benefits to businesses. Though VR is very attractive in itself, one often-overlooked driving force behind its success is the wealth of data it provides. These data are relatively easy to capture and can be used both to create better VR experiences and to provide precious feedback that stimulates business growth. There are so many practical use cases for VR in business that the only real limitation is our imagination!