Renan Guarese1,2, Pererik Andreasson3, Emil Nilsson3, Anderson Maciel1. 1. Federal University of Rio Grande do Sul (UFRGS), Institute of Informatics (INF), Porto Alegre 91501-970, Brazil. 2. Royal Melbourne Institute of Technology (RMIT), Melbourne 3001, Australia. 3. Halmstad University (HH), School of Information Technology, Halmstad 302-50, Sweden.
Abstract
In electrical engineering, hardware experts often need to analyze electromagnetic radiation data to detect any external interference or anomaly. The field that studies this sort of assessment is called electromagnetic compatibility (EMC). As a way to support EMC analysis, we propose the use of Augmented Situated Visualization (ASV) to supply professionals with visual and interactive information that helps them comprehend that data, while situating it where it is most relevant in its spatial context. Users are able to interact with the visualization by changing the attributes being displayed, comparing the overlaps of multiple fields, and extracting data, as a way to refine their search. The solutions proposed in this work were tested against each other in comparable 2D and 3D interactive visualizations of the same data, in a series of data-extraction assessments with users, as a means to validate the approaches. Results exposed a correctness-time trade-off between the interaction methods. The hand-based techniques (Hand Slider and Touch Lens) were the least error-prone, inducing close to half as many errors as the gaze-based method. Touch Lens was also the least time-consuming method, taking on average less than half the time required by the others. For the visualization methods tested, the 2D ray casts presented a higher usability score and a lower workload index than the 3D topology view, while exposing over twice the error ratio. Ultimately, this work exposes how AR can help users perform better in a decision-making context, particularly in EMC-related tasks, while also furthering research in the ASV field.
In hardware testing, the operation of electronic and electrical devices can be strongly affected by external sources, such as the frequency components of electromagnetic waves emitted by natural lightning, fluorescent lights, computers, and other similar devices [1]. As an example, radio receivers extract the information encoded in the intercepted waves. Any electromagnetic interference (EMI) received will cause the transmission to be either disrupted or misinterpreted, as shown in Fig. 1
. Electromagnetic Compatibility (EMC) is the study concerned with the design of electronic systems such that interference from or to that system will be minimized, in order not to affect any of its surroundings. A system can be considered electromagnetically compatible with its environment if it satisfies three criteria [1]:
Fig. 1
Illustration of a simple EMI problem [2].
It does not cause interference with other systems.
It is not susceptible to emissions from other systems.
It does not cause interference with itself.

This assessment usually starts by measuring the EMI radiated from – and conducted to – the electronic device being tested. This procedure exposes whether or not the aforementioned criteria are obeyed. In his book [2], Morgan notes that it is even advantageous to submit particular hardware components to tests during the design process of such equipment. These criteria are often manually assessed by an expert, who needs to visually analyze the three-dimensional (3D) electromagnetic field (EMF) data.

This problem may be addressed by using an Information Visualization paradigm that presents visual and interactive EMF data situated in the actual points in space it was measured from. According to Tatzgern [3], situated visualizations of data exposed in Augmented Reality (AR) can significantly increase the potential of problem assessment by making the information spatially context-aware and reducing user effort. Since decision-making is imperative in this context, such tasks may present an elevation in performance (in time, cognitive, and physical efforts) with this kind of AR visualization [4].

Hereupon, this paper contributes a review and a design that suggests the use of Augmented Situated Visualization (ASV), as well as novel interaction methods for data extraction while analyzing data in the EMC field. It is particularly aimed at easing decision-making according to Balleine’s definition [5]. Alongside an AR optical see-through head-mounted display (HMD) to provide in situ information regarding the EM fields in the user’s vicinity, ASV can aid users to perceive the data without exhaustively exploring it or making a mental translation of it from a 2D map perspective.
In such a way, this paper also contributes with a task-based study to assess user performance in augmented situated visualization of electromagnetic fields.
Related work
In scientific visualization, it is customary to address novel data visualization and rendering methods in a generic manner, in order not to focus on any particular platforms or display paradigms. Several works dealing with 3D visualization of data in different domains are presented in this fashion [6], [7], [8]. Contrary to these, the current work focuses on the Augmented Situated Visualization paradigm, with the aim to expose optimal ways to display and interact with EMC information located in situ.
EMC visualization systems
In 2018, Sato et al. [9] developed a method to measure and display the intensity of 3D EMFs on a tablet device. It was possible to visualize a 3D distribution of the field measured in real time by a dosimeter, using markers to position the data according to the actual device being measured. Their displayed data, however, is quite discrete, failing to expose the continuity of a 3D field, likely due to the imprecision of their measurement method. In the same year, Isrie et al. [10] demonstrated a data acquisition system that displays readings from a situated power sensor using the current GPS location in a heads-up display. Their approach allows users to move along great distances and still have access to the data. The data displayed in their work is entirely made of 2D graphs, not properly situated in the surrounding area, which might cause users to misinterpret the precise location of the readings. Neither of these works presented any user studies.

In their 2019 study, Rioult et al. [11] demonstrated an EMC scanning and visualization system aimed at providing fast readings in confined and even remote environments, using a compact portable device, depicted in Fig. 2
. The device is capable of measuring electromagnetic radiations as well as presenting them in AR, being situated in loco as 2D grids. Their work focuses solely on relatively small scale situations, requiring the environment to be manually scanned. Again, no user studies were presented to evaluate system usability. In order to address the aforementioned limitations, the present paper focuses on exposing EMC data in a more continuous and precise manner, maintaining the 3D topology of the fields radiated from the tested devices. Every piece of collected data will also be spatially situated around the hardware being analyzed, preserving the ASV paradigm.
Fig. 2
Examples of Rioult et al. [11] situated renderings of EMI data in AR.
Situated visualization in AR
In 2016, a study by Willett et al. [12] brought to light the benefits, trade-offs, and some linguistic definitions regarding Information Visualization in AR. Willett revises different application works from the literature, semantically defines and groups them and their methods, and presents challenges, limitations, and possible benefits for each of the definitions. In a similar 2019 paper, Marques et al. [13] discussed situated visualization from a decision-making standpoint. Regarding its aid towards decision support systems, they made a literature analysis discussing the current areas of application, benefits, challenges, and opportunities. Marques exposes how they found situated visualization data in decision-making contexts to be more rapidly and intuitively explored than the counterparts tested, allowing for earlier detection of flaws and higher work productivity. Unlike both of these previous works, we present a novel ASV application and perform original user tests to validate the different visualization and interaction methods used.

In recent years, multiple works tested the efficiency of data perception and analysis in Augmented Situated Visualization when compared to 2D passive and interactive interfaces in different scenarios. These studies (depicted in Fig. 3
) exposed some advantages in tasks performed using the AR approach, including gains in accuracy [14], lower time taken [15] and lower cognitive effort levels [16]. All of the tests involved simple day-to-day tasks, such as picking a place to sit, buying groceries, and following a GPS route. The present work, on the other hand, puts ASV into an industry context, with tasks being performed on real data extracted from state-of-the-art commercial equipment. Besides that, the multiple methods proposed are tested against each other in an all-ASV context, in order to establish their effects on task performance.
Fig. 3
Examples of Augmented Situated Visualizations. Left: Data regarding seats situated inside a classroom [14]. Center: Nutrition data situated on food products [15]. Right: GPS routing data on the actual streets [16].
MR interaction studies
Several very recent works have furthered the exploration of freehand gestures in AR and VR contexts [17], [18], [19]. Satriadi et al. [17] performed user studies comparing novel freehand interaction techniques in multiscale navigation tasks, such as pan and zoom on a digital map. Their results exposed a positive influence on user fatigue for their rate-based input mapping technique, however with a trade-off in task completion time. Kang et al. [18] compared object selection and translation in three techniques. Interviews with the subjects exposed that their direct touch and grab method provided a higher sense of enjoyment and discoverability. Unlike these, although superficially addressing object manipulation, in this work we conduct tests regarding data-extraction tasks, particularly in a practical use case scenario. Similar to the work of Kang et al., one of our proposed interaction methods uses an analogous direct touch metaphor.

In relation to other forms of interaction, two very recent works make use of gazing in MR contexts [19], [20]. Lu et al. [20] compared three different forms of accessing content using eye and head movements. In their user tests, the eye-glance technique was preferred by the subjects in long monitoring tasks, while also exposing the lowest times taken to acquire information. Meanwhile, Chen et al. [19] explored the use of gazing movements as a means to select between disambiguation options while the user’s hands are already in use. Their user study revealed the head movement to be the overall preferred technique. Following their line of thought, the current paper tests a similar gazing technique, however aimed at a refined data-extraction context, testing its precision.
Methodology
Room-scale EMC
For testing large devices as a whole, entire rooms may be required, especially when the interference between multiple setups needs to be analyzed. While conducting this sort of experiment, total isolation between the test space and the outside electromagnetic environment is recommended. According to [2], it is undesirable (and in some cases illegal) to radiate high field strengths across whole bands of frequencies when conducting radiated susceptibility testing. In this fashion, the use of screened chambers, such as the one depicted in Fig. 4
-right, became widespread. They are commonly built as Faraday cages and lined with absorbing material inside, making them anechoic – i.e. rooms without any reflection of either sound or electromagnetic waves. A large antenna, transmitters, and receivers are used for the characterization of radiation patterns and EMC performance at different frequency bands. Additionally, circular platforms (turntables) are used to rotate the devices during testing, as to capture a 360-degree view.
Fig. 4
Left: Example of an antenna analyzed in the chamber. Right: Example of a chamber setup ready for a reading.
Having access to the measurements taken by one of these fully equipped anechoic chambers, it is viable to render the EMC readings in AR, situating them around the tested devices. These readings are meant for users to detect any EMI that may cause the equipment to malfunction or affect other systems. The main advantage of such an application is to spatially expose exactly where the interference is being propagated from and into which other components. In a situated view, it is possible to perceive the influence of multiple devices on each other in loco, either inside the chamber during tests or anywhere else these devices will be located later on, by transporting the virtual renderings along with the real components.

In a preliminary attempt to develop a prototype of the proposal, the physical room was scanned into a 3D mesh (Fig. 5
), with its points in space being used as anchors for the data to be placed upon. By loading this environment into an AR HMD with spatial tracking capabilities, the mesh can be matched with the real architecture of the current area, adjusting all virtual renderings into their proper positions. For the current project, the Microsoft HoloLens (First generation)
was used both to scan the room and to display the data.
Fig. 5
Scanned mesh of the anechoic chamber.
In a demonstration made to three experts in the EMC testing area, the feedback was outright positive. Users commended the visualization presented as being highly useful for analyzing real data, even at a commercial level. This demonstration served as a first evaluation of the concept. It also provided feedback from the expert community, allowing for an understanding of their needs, to be fulfilled in the upcoming steps.
Data visualization
Given the nature of the data measured by the aforementioned EMC chamber analysis environment, two visualization patterns were planned and implemented: 3D field topologies and 2D color-coded ray casts. These were designed according to the preliminary feedback given by EMC experts, who emphasized the need to expose the scalability and reach of the field radiation. This is very relevant, since understanding where (into which components or devices) and how (with which intensity or frequency) the radiation propagates is one of the primary tasks in EMC testing.

3D field topology. To supply the user with a broad view of the field topology, a full three-dimensional mesh of the EMF is produced based on the input data set. After converting the spherical coordinate vectors into Cartesian points in space, this object is rendered by drawing a line between every point and its next neighbor. Each field presented in this view is read based on a frequency given in the data set. Multiple frequencies of the same field, or even different ones, can be compared by overlaying them, as can be seen in Fig. 6
, maintaining their topologies and intensities relative to one another. Beyond that, it is also possible to scale the fields by altering the logarithmic constant used to convert the points in space from the spherical vectors. This alteration slightly changes the topology of the field, while also making it reach farther away from the center, properly covering the points in space the actual EMF reaches. The loss in intensity given the distance can also be calculated in this context. By doing this, the user can perceive which devices or components the field hits, and with which intensity, allowing them to fiddle with the environment setup and avoid undesired EMI. The scaling interaction method implemented, as well as other transform manipulation techniques, will not be further addressed in this study as these did not take part in the user tests performed.
Fig. 6
Examples of 3D field topologies being compared in two different frequencies, depicted in contrast by the blue and orange colors.
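The conversion from spherical field samples to mesh vertices can be sketched as follows. The logarithmic intensity-to-radius mapping and the `log_const` scaling parameter are assumptions for illustration; the paper does not give the exact formula:

```python
import math

def spherical_to_cartesian(intensity, theta, phi, log_const=1.0):
    """Map one spherical field sample to a Cartesian vertex.

    theta is the polar angle and phi the azimuth, both in radians.
    The log10 intensity-to-radius mapping and log_const are
    illustrative assumptions, not the paper's exact formula.
    """
    radius = log_const * math.log10(1.0 + intensity)
    x = radius * math.sin(theta) * math.cos(phi)
    y = radius * math.sin(theta) * math.sin(phi)
    z = radius * math.cos(theta)
    return (x, y, z)

def field_mesh(samples, log_const=1.0):
    """Convert a list of (intensity, theta, phi) samples; the renderer
    then draws a line between each point and its next neighbor."""
    return [spherical_to_cartesian(i, t, p, log_const) for i, t, p in samples]
```

Increasing `log_const` corresponds to the scaling interaction mentioned in the text: the field reaches farther from the center while its shape changes only slightly.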
2D Color-coded ray cast. Besides the full three-dimensional fields, the EMC chamber analysis environment also provides faster readings of planar sections of the field. These are simply degree-by-degree measurements of the intensities in a plane at a particular height of the EMF, at a given frequency. Since the goal is to interpret where the EMI collides with different objects, these vectors are rendered by drawing ray casts extending into infinity, as seen in Fig. 7
. The intensity of each vector is interpreted both as a distance from the center (so as to avoid occluding the original tested device) and as a color scheme from least intense to most intense. The decay in signal strength is also conveyed in the loss of color intensity in each ray: rays become more transparent the farther away they are from the tested object.
Fig. 7
Examples of 2D Color-coded ray casts. Colors indicate signal strength. In these particular images, they vary from turquoise (weaker) to yellow (stronger). Top: view from above the measured antenna. Bottom: view from below, facing forward. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
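The color and transparency encoding can be sketched as below. The endpoint colors, intensity range, and linear fade distance are assumptions chosen to match the figure description, not values from the paper:

```python
def ray_color(intensity, distance, i_min=0.0, i_max=1.0, fade_dist=5.0):
    """RGBA color for one point along a ray.

    Interpolates from turquoise (weakest) to yellow (strongest) and
    fades alpha linearly with distance from the tested object. The
    endpoint colors, ranges, and fade distance are illustrative
    assumptions.
    """
    t = max(0.0, min(1.0, (intensity - i_min) / (i_max - i_min)))
    turquoise = (0.25, 0.88, 0.82)
    yellow = (1.00, 0.92, 0.00)
    r, g, b = [c0 + t * (c1 - c0) for c0, c1 in zip(turquoise, yellow)]
    alpha = max(0.0, 1.0 - distance / fade_dist)  # transparency falloff
    return (r, g, b, alpha)
```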
Interaction methods
Given the primary task of analyzing the data to make assessments, having easy, precise, and uncluttered access to the values in a data set is fundamental [21]. In order to narrow down an optimal way for users to extract data from the visualizations, three interaction methods were developed, to be tested against each other in ASV-oriented data-extraction tests. Most are inspired by classic or recent techniques from the literature, all adapted to the current context. In part, given the global pandemic context, we prioritized minimizing user contact with shared surfaces, such as controllers or screens.

Hand Slider. Meant primarily for the color-coded ray cast visualization, the Hand Slider method requires users to select a single ray from the set via a gaze and air-tap combination, enabling a panel with an intensity measurement atop the line. Users are prompted to slide one of their hands sideways, moving the panel accordingly along the selected colored line (as in Fig. 8
), exposing the different intensity values, much like in a regular 2D UI slider. This is expected to provide a more refined reading from the desired point in space.
Fig. 8
Hand Slider data extraction method. The panel exposes the intensity reading at that point of the vector. Blue and red arrows indicate the possible movement directions, controlled by the user’s hand. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Gazing. Similar to the previous method, the first step is to select a single colored line from the visualization. Afterward, whichever point of the line the user gazes at displays its intensity measurement. By moving their gaze along the line, users have access to the variations in its values, up to the moment they perform another air-tap, saving the last value read and deselecting the line. Gazing is proposed as a quicker, hands-free method, which can also be used from a distance, as can be seen in Fig. 9
. This method is already widely and commercially used for object selection in HMDs, having been explored also in recent works [19], [20].
Fig. 9
Gazing data extraction method. User controls the point of reading by centering their vision at different points of the selected line.
Touch Lens. Designed to be a more straightforward metaphor, the Touch Lens method acts as if the user’s hand were a magnifying glass. By simply laying their hand over or on a point in space – virtually touching the data – users have the reading from that point displayed on a hand-guided panel (seen in Fig. 10
). This was planned as being a more lifelike technique, not demanding any complex or novel gestures from the user, presenting a more localized view of the data. This method was partly inspired by the work of Wagner Filho et al. [22], where a metaphor of touching virtual data in a scatter plot is used to select it.
Fig. 10
Touch Lens data extraction method, both in the 2D (left and center) and 3D (right) data visualization. Users need to touch the virtual objects to obtain a measurement of that point.
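A minimal sketch of the Touch Lens lookup, assuming the field samples are available as 3D points with intensities; the `touch_radius` threshold is a hypothetical parameter, not a value from the paper:

```python
def touch_lens_reading(hand_pos, samples, touch_radius=0.05):
    """Return the data sample nearest the user's hand, if 'touched'.

    samples is a list of (position, intensity) pairs, with positions
    as (x, y, z) tuples in meters. touch_radius is an assumed
    threshold for counting the hand as touching the virtual data.
    """
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    pos, value = min(samples, key=lambda s: dist2(hand_pos, s[0]))
    if dist2(hand_pos, pos) <= touch_radius ** 2:
        return value  # shown on the hand-guided panel
    return None  # hand is not touching any data point
```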
User study
In order to assess the validity of the proposed solutions, two user tests were designed and performed: a comparison between three interaction methods and another between two visualization techniques. Each interaction and visualization method was evaluated against the other methods in its respective group. In both comparisons, users were asked to perform a series of data-extraction tasks in an ASV context. Their performances were measured regarding task correctness, time, and steps taken during each trial.

Moreover, a series of questions was used to assess a set of subjective aspects, such as usability (using SUS [23], the System Usability Scale), overall workload (using NASA TLX [24], the NASA Task Load Index), and any possible simulator sickness symptoms (using SSQ [25], the Simulator Sickness Questionnaire). Prior to any test, users were asked to sign a consent agreement, answer a few demographic questions, and go through a short tutorial on how to use the HMD equipment.
Hypotheses
The experiment was designed to test the validity of the three hypotheses described below.

The first hypothesis concerns the distance users need to walk around their environment while performing data-extraction activities in an Augmented Situated Visualization context. For this, the number of steps each user takes during each condition is counted using a pedometer. Thus, the fewer steps taken on average under a specific condition, the less users are likely to move around during such tasks while using that combination of methods.

The second hypothesis concerns the time users need to spend while performing data-extraction activities in an ASV context. For this, the amount of time each user takes to complete each condition trial is recorded with a chronometer. The results for the different methods are compared in order to establish any significant differences between them.

The third hypothesis concerns the correctness of the data extracted by users at different positions in space. This is tested by defining tasks using multiple physical target objects placed at different positions in the environment. An error ratio is computed as the relation between the user readings and the ground truth values for each trial. In this fashion, the smallest average error ratio for a specific condition represents the combination of methods that obtains the highest correctness.
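The error-ratio metric can be written as a one-liner. A relative absolute error is assumed here as an illustration, since the text does not give the exact formula:

```python
def error_ratio(user_reading, ground_truth):
    """Relation between a user's reading and the ground truth value.

    Assumed form: relative absolute error. 0.0 means a perfect
    reading; smaller values mean higher correctness.
    """
    return abs(user_reading - ground_truth) / abs(ground_truth)
```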
ASV data-extraction tasks
For the tests, a prop antenna was used to situate real EMF data around it, as shown in Fig. 11
-left. In the experiment room, ten target objects were placed at different distances, orientations, and heights. One at a time, users were asked to measure the intensity of the field where it touched one of these targets, much like Fig. 11-right. All of their measurements were logged, as well as the time it took to complete each task and the number of steps they took around the room, as a way to assess their performances. Before and after every trial, subjects were prompted to take a seat at a default position in the room. Apart from that, users were free to walk and take as much time as they wanted to complete each task, as well as use the interaction method they were given in whichever way they preferred. Prior to the set of tests for each technique, subjects went through three tutorial runs with it, to allow them to learn how to operate it. As previously mentioned, the tests were divided into two comparisons, separated into interaction and visualization methods. The specifics for each of these are described below.
Fig. 11
Test Setup. Left: Virtual colored vectors signalize the radiation emitted by the prop antenna at the center. Right: User measuring the signal strength hitting a target object (cardboard box).
Interaction test. In this part of the tests, the interaction method was used as the independent variable, keeping the visualization fixed as the 2D colored ray casts. The three methods designed for data-extraction - Hand Slider (HaSl), Gazing (Gaze) and Touch Lens (2DTL) - were tested against each other. This test was designed to further develop the study of interaction methods in an ASV context. By proposing and testing three different techniques, the aim is to establish whether one of them has outstanding performance when compared to the others for each of the hypotheses proposed.

The order in which they were tested was alternated between users, so as to avoid a learning bias. For each user, three of the ten targets were elected for each method, so as not to repeat targets between conditions. Each of these three targets was repeated three times to prevent outliers, using a Latin square to alternate the order. For each technique tested, three of the remaining targets were selected at random to be used in three tutorial runs (one per target) right before each condition was tested, to allow the subject to learn how to operate it. After the nine trials for each method were concluded, subjects were asked to answer the aforementioned questionnaires before continuing to the next condition.

Visualization test. Regarding the visualization part of the tests, the visualization technique was used as the independent variable, fixing the interaction method as the Touch Lens. The two visualization designs – 3D field topology (3DTL) and 2D color-coded ray casts (2DTL) – were tested against each other. This assessment was developed in order to further explore different visualization techniques in the situated AR context.
The objective is to demonstrate whether one visualization presents a significant increase in performance when compared to the other for each of the hypotheses proposed, maintaining the same interaction method for both.

Again, the order of the methods was alternated between users. Three of the targets were used for each method to prevent repetitions between conditions, averting learning bias. Each of the targets was repeated three times per condition in alternated orders, avoiding outliers. In this test, the remaining unused targets were used at random in three tutorial trials before each method was tested, so the subjects could learn how to operate them. After all trials in each condition, users answered the qualitative questionnaires before moving on to the next method.
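The counterbalanced ordering described above can be generated with a Latin square. A simple cyclic construction is assumed here, as the paper does not state which square was used:

```python
def latin_square_orders(items):
    """Balanced cyclic Latin square of presentation orders.

    Each row is one trial block's target order; every item appears
    once per row and once per position across rows, counterbalancing
    order effects. The cyclic construction is an assumption.
    """
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]
```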
Results
Demography
Given the current global COVID-19 pandemic situation, only five subjects were able to take part in the tests. In this sense, despite treating the data with a statistical analysis given the number of samples collected, the limited population size requires the results to be interpreted as anecdotal evidence. The recommended social distancing and equipment sanitization procedures were followed so as not to put any of the volunteers or testers at risk. Users were aged between 19 and 40, the average age being 28, with a standard deviation of 8.21. Two of the subjects identified themselves as female, the other three as male. Three of them claimed to either work or study in IT-related fields. All subjects were either enrolled in or had already graduated from a higher education program (two B.Sc., two M.Sc., and one Ph.D.). Regarding physical conditions, one subject was left-handed, three myopic, two astigmatic, and one color blind.

When asked about their levels of familiarity with certain technologies, three subjects answered very high about video games (one average and one none), one very high for AR in smartphones (three average and one none), one average for VR HMDs (two low and two none), and one very high for AR HMDs (two low and three none).

No significant correlation was found between age, gender, occupation, physical conditions, or previous familiarity and the performance measurements.
Objective results
Given the values measured, a Shapiro-Wilk test showed that most of the distributions were normal. We then analyzed them further with a series of paired t-tests to check the significance of the results.

Walking Steps. As seen in Fig. 12, a comparison of the number of steps taken in the three interaction conditions was conducted. The same was done for the two visualization conditions.
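As an illustration of the analysis pipeline, a paired t-test operates on per-participant differences between the two conditions. The sketch below uses hypothetical per-participant step counts (the paper reports only aggregate means and standard deviations) and computes the t-statistic directly with the standard library; it is equivalent to `scipy.stats.ttest_rel` on the same data:

```python
import math
from statistics import mean, stdev

# Hypothetical per-participant mean step counts for two paired conditions
# (illustrative values only; not the study's raw data)
hand_slider = [10.1, 14.2, 6.3, 12.8, 8.0]
gaze = [8.4, 11.9, 4.7, 10.2, 6.6]

# Paired t-test: work on per-participant differences,
# since every subject experienced both conditions
diffs = [a - b for a, b in zip(hand_slider, gaze)]
n = len(diffs)
# t-statistic with n-1 degrees of freedom (stdev is the sample st. dev.)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print(round(t, 2))  # → 8.47
```

The resulting t would then be compared against the t-distribution with n-1 degrees of freedom to obtain the p-value, exactly as in the comparisons reported below.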
Fig. 12
Average steps taken by the participants, regarding each of the methods tested.
There was a significant difference in the number of steps taken when comparing the Hand Slider method (M=10.27, SD=4.69) with the Gaze method (M=8.36, SD=5.78) and with the Touch Lens method (M=8.56, SD=3.34). This suggests a considerable decrease in the self-displacement users needed to perform in order to accomplish the tests with the Gaze method (18.61%) and with the Touch Lens method (16.66%). Regarding the visualization methods exposed in Section 3.2, although we observed a substantial increase in the mean number of steps taken (23.63% from 2D to 3D), the results could not demonstrate a significant effect.

Time. In relation to the amount of time it took the subjects to finish the trials (Fig. 13), a significant difference was found in a paired t-test when comparing the Hand Slider method (M=36.18, SD=31.25) with the Gaze method (M=20.93, SD=21.97), as well as with the Touch Lens method (M=12.24, SD=7.57). Taking the Hand Slider as the baseline, these results indicate a considerable decrease (42.1%) in the time users needed to accomplish the task with the Gaze method, along with an even greater decrease (66.1%) for the Touch Lens method.
Fig. 13
Average time taken by the participants, regarding each of the methods tested.
Once more, albeit exposing a substantial increase in the mean time taken (87.6%), the comparison between the 2D and 3D visualization methods was not statistically significant. The lack of significance is probably due to the high variance between subjects in terms of 3D familiarity, combined with the relatively small number of subjects.

Correctness. Based on the intensity measurements given by the subjects during the trials, an error ratio was computed as the relation between the user reading and the ground truth value. This ground truth value was measured as the average result of several ad hoc test runs for each target location. In this sense, smaller error values correspond to more accurate answers. The average error committed by users during each method trial is shown in Fig. 14.
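The error ratio described above can be sketched as follows. The paper does not state the exact formula, so a relative (percentage) error against the averaged ground truth is assumed here; the function name, signature, and example values are hypothetical:

```python
def error_ratio(user_reading: float, ground_truth_runs: list[float]) -> float:
    """Percentage error of a user's intensity reading against the ground truth.

    The ground truth is taken as the mean of several ad hoc measurement runs
    at the same target location, as described in the text. A relative error
    is assumed, since the authors' exact formula is not given.
    """
    ground_truth = sum(ground_truth_runs) / len(ground_truth_runs)
    return abs(user_reading - ground_truth) / abs(ground_truth) * 100.0

# A reading of 47 against ad hoc runs averaging 50 yields a 6% error
print(round(error_ratio(47.0, [49.0, 50.0, 51.0]), 2))  # → 6.0
```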
Fig. 14
Average intensity error based on measurements taken by the participants, regarding each of the methods tested.
In a paired t-test, a significant difference was found in the error ratio committed by the users between the Hand Slider method (M=8.84, SD=17.51) and the Gaze method (M=17.93, SD=22.87) conditions. This difference implies a considerable (102.9%) increase in the error ratio users committed in order to accomplish the tests with the Gaze method.

Regarding the visualization comparison, there was a significant effect on the errors made by the users between the 2D Touch Lens method (M=10.10, SD=7.05) and the 3D Touch Lens method (M=4.61, SD=2.96) conditions. This indicates a considerable (54.38%) decrease in the error ratio to accomplish the tests with the 3D topography visualizations.
Subjective results
Usability. After the set of trials for each of the conditions tested, a series of subjective questionnaires was administered to the subjects. SUS [23] was used to measure the usability of each method. In its score-based analysis, the results are shown in Fig. 15. In short, all techniques were deemed either acceptable or marginal, with all three interaction methods being rated above average in the 2D visualization.
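The SUS scores referenced above follow the standard scoring scheme: ten items rated 1-5, with odd-numbered (positively worded) items contributing `response - 1` and even-numbered (negatively worded) items contributing `5 - response`, the sum scaled by 2.5 to a 0-100 range. A minimal sketch (the example response sheet is hypothetical):

```python
def sus_score(responses: list[int]) -> float:
    """Compute the System Usability Scale score (0-100) from ten 1-5 responses.

    Odd-numbered items are positively worded (contribution: response - 1);
    even-numbered items are negatively worded (contribution: 5 - response).
    The summed contributions are scaled by 2.5.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical response sheet; 68 is commonly cited as the average SUS score
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```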
Fig. 15
System Usability Score [23] for each of the conditions tested.
Discomfort. The data collected for simulator sickness [25] revealed that no high discomfort was observed in any of the tested conditions, as depicted in Fig. 16. Three out of the four conditions tested presented negligible symptom results [26]. The Gazing method, although having reached the level of significant symptoms (10.47 points, SD=13.32), scored far below the level of concern, since most of the subjects reported minimal (20%) or no symptoms at all (40%).
Fig. 16
Sickness score for each of the conditions tested, based on the Simulator Sickness Questionnaire [25], [26].
Workload. With the objective of measuring the different types of effort exerted by the subjects, the NASA TLX test [24] was applied. Considering the results exposed in Fig. 17, it is notable that all three interaction methods (using the 2D visualization) were perceived as less demanding than the 3D visualization technique. Moreover, the Touch Lens interaction method in its 2D view was deemed the least demanding in all workload aspects.
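For reference, the weighted NASA-TLX workload combines six subscale ratings (0-100) with weights obtained from 15 pairwise comparisons between subscales. A minimal sketch with hypothetical ratings and weights (the paper does not report per-subscale raw values):

```python
def tlx_workload(ratings: dict[str, float], weights: dict[str, int]) -> float:
    """Weighted NASA-TLX workload score (0-100).

    Each of the six subscales is rated 0-100; each weight is the number of
    times that subscale was chosen across the 15 pairwise comparisons,
    so the weights sum to 15.
    """
    assert sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

# Hypothetical single-participant sheet
ratings = {"mental": 60, "physical": 20, "temporal": 40,
           "performance": 30, "effort": 50, "frustration": 25}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(round(tlx_workload(ratings, weights), 1))  # → 44.3
```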
Fig. 17
NASA Task Load Index [24] scores for each of the conditions tested.
Additional feedback. In a more general assessment, after each test condition, users were asked to rate their agreement with the following six statements on a 5-point scale:

1. In the application world, I had a sense that the virtual elements were there with me.
2. It was easy for me to navigate through the data.
3. It was easy to interact with the system.
4. It was easy to find the required information.
5. It was easy to remember how to do what I was asked.
6. Using the technique was comfortable.

The results are color-mapped in Fig. 18.

Fig. 18
User responses to general questions regarding each of the conditions tested.

When directly asked to rank the interaction methods tested from most to least favorite, 60% of the subjects placed the Gazing method first, with the other 40% choosing Touch Lens as their preferred interaction method. Regarding the visualization techniques, 60% said they preferred the 2D ray casts over the 3D topology. In addition, two subjects complained about the device's hand-tracking capabilities, claiming that it lost tracking multiple times and that holding out their arm and finger for the HMD to recognize them became tiresome over the course of the tests.
Discussion
Based on the results presented, we offer a short analysis of their possible meanings. The discussion is split between the Interaction and Visualization comparisons as a way of discerning the contributions to each field.
Interaction
Considering the interaction portion of the hypotheses presented, all three were shown to achieve significant differences in the results, partially supporting each of them. In a narrow view, it would be possible to rank the three interaction methods according to each of the performance variables measured. This would leave Hand Slider as the best technique regarding correctness, in the sense that it presented a significantly smaller error rate than the others, in part confirming the respective hypothesis. Considering the manual refinement precision it offers, this is not a surprising result. It is relevant to note that it also ranked as the most time-consuming among the interaction techniques, which might indicate a precision-time trade-off.

In the same line of thought, Touch Lens ranked as the least time-demanding method, with statistically significant results, validating the interaction part of its respective hypothesis. Since users quickly realized that they only had to walk to the target and place their hands there to perform a reading, their cognitive load might have been diminished during these trials, which might explain the decrease in time. This method also presented favorable results for intensity correctness (the second-smallest error rate), steps taken (the second-fewest average number of steps), and workload (the smallest Task Load Index), which are highly compelling results in its favor. Among the techniques, the touching metaphor was also the most familiar to the subjects, given its similarity to everyday human behavior, which may explain the results.

The Gazing method came first in requiring the least amount of physical displacement by the user, supporting its respective hypothesis. This makes a case for it being the overall most advantageous way of interacting with data from a distance, especially since it obtained both the best SUS score and the overall user preference among the interaction methods.
The actual number of steps taken during these trials is arguably due to data occlusion, either by physical objects or by other pieces of data. This could be mitigated with techniques that minimize occlusion, such as making the data dynamically adapt to the user's position [14], or letting the user interact with the occlusion, e.g., filtering through walls or disabling it altogether [27]. Although a fast and low-demand technique, gazing scored very poorly in its correctness assessment, which suggests that it is only viable for quick measurements that do not prioritize accuracy. Another possible concern is how high its sickness score was compared to all others, which might indicate a slight tendency toward nausea from excessive head movement.
Visualization
It is important to note that adding a third dimension to a problem is expected to increase its difficulty. In that regard, the results of the comparison between the 2D ray casts and the 3D topology are much as expected, despite the number of steps and the amount of time not having reached statistical significance. Nevertheless, we believe that a larger set of users would suffice to expose the significance of these correlations, supporting the corresponding hypotheses. Considering the correctness results (Fig. 14), on the other hand, a very significant effect of the visualization method can be observed, supporting the respective hypothesis.

Regarding the lack of a third dimension in the ray casts view, we believe that allowing the user to manually segment the 3D topologies into 2D planar cross-sections would bring out the best in both visualizations [28]. This would both legitimize the use of the planar ray casts as a fully spatial visualization approach and allow the topology view to provide better usability and a lesser workload, besides enabling the use of the other interaction techniques. Despite the very low usability score and very high Task Load Index, 40% of the subjects still claimed to prefer the 3D topology view over its 2D counterpart. Given the correctness assessment, the 3D topology view clearly showed its value over its two-dimensional competitor, ranking as the most precise visualization when analyzed with the same interaction technique (Touch Lens).
Limitations
Despite having treated the data with a statistical analysis appropriate to the number of samples collected, all results exposed in this work should only be interpreted as anecdotal evidence. Given the COVID-19 pandemic at the time, the number of test participants had to be limited to a small population so as to comply with local health and safety guidelines. Broader tests should be implemented in the future.

Regarding the 3D visualization, the non-expert subjects in our study might have been misled into interpreting the topologies as volumes, due to their virtual mesh. Electromagnetic fields are continuously broadcast in all directions, albeit with different intensities, which could also be showcased as vectors. This notion is better expressed in the 2D ray casts view. Segmenting the 3D topology into 2D planar fields is arguably a reasonable way to rectify this limitation.

To properly evaluate the use of ASV in the EMC field, a formal specialist user test is still required. The next step would be to assess the proposed visualizations with experts with EMC backgrounds in a set of interference-avoidance tasks. This test should be held inside an anechoic chamber, where subjects would analyze the EMI data between two different antennas and move them around to find an optimal placement, minimizing interference.
Conclusion
This work presented the proposal, development, and analysis of ASV interaction and visualization methods aimed at aiding decision-making in an EMC testing context. The use-case application is intended to help expert users analyze electromagnetic fields and EMC data in general. Using an AR HMD, users were able to visualize the spatial data read by industry-standard EMC equipment in a series of task-based sessions, having their performance assessed in multiple ways.

The approaches presented in this paper were shown to have different effects on data-extraction tasks. The least error-prone and least effort-demanding (in time and user displacement) methods were exposed, suggesting that specific techniques may be chosen depending on the task's priority. Suggestions have also been made to proceed with further design by combining different methods and testing the application in real EMC assessments, with actual field experts as users. Furthermore, this was a relevant step forward in Situated Visualization research, as there is still a gap in commercial and industrial applications that needs to be filled, and the proper interaction and visualization techniques need to be in place to support its reach to the general public.

In order to legitimize the use of ASV in the EMC field, specialist user studies are required. One future step is to assess the proposed visualizations with EMC experts. Based on our findings, we suggest implementing the combination of the two visualization methods into a field segmentation tool and testing it against traditional allocentric views of the same data. We also believe the gazing technique does not have enough advantages for it to be included in those tests.
CRediT authorship contribution statement
Renan Guarese: Conceptualization, Methodology, Software, Writing - original draft, Visualization, Supervision. Pererik Andreasson: Data curation, Supervision. Emil Nilsson: Data curation, Supervision. Anderson Maciel: Conceptualization, Writing - review & editing, Supervision.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.