
Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke.

E L Isenstein1,2,3, T Waz1, A LoPrete3,4,5, Y Hernandez2,6, E J Knight2,7, A Busza2,8, D Tadin1,2,3,9.   

Abstract

The acquisition of sensory information about the world is a dynamic and interactive experience, yet the majority of sensory research focuses on perception without action and is conducted with participants who are passive observers with very limited control over their environment. This approach allows for highly controlled, repeatable experiments and has led to major advances in our understanding of basic sensory processing. Typical human perceptual experiences, however, are far more complex than conventional action-perception experiments and often involve bi-directional interactions between perception and action. Innovations in virtual reality (VR) technology offer an approach to close this notable disconnect between perceptual experiences and experiments. VR experiments can be conducted with a high level of empirical control while also allowing for movement and agency as well as controlled naturalistic environments. New VR technology also permits tracking of fine hand movements, allowing for seamless empirical integration of perception and action. Here, we used VR to assess how multisensory information and cognitive demands affect hand movements while reaching for virtual targets. First, we manipulated the visibility of the reaching hand to uncouple vision and proprioception in a task measuring accuracy while reaching toward a virtual target (n = 20, healthy young adults). The results, which as expected revealed multisensory facilitation, provided a rapid and a highly sensitive measure of isolated proprioceptive accuracy. In the second experiment, we presented the virtual target only briefly and showed that VR can be used as an efficient and robust measurement of spatial memory (n = 18, healthy young adults). Finally, to assess the feasibility of using VR to study perception and action in populations with physical disabilities, we showed that the results from the visual-proprioceptive task generalize to two patients with recent cerebellar stroke. 
Overall, we show that VR coupled with hand-tracking offers an efficient and adaptable way to study human perception and action.


Year:  2022        PMID: 36174027      PMCID: PMC9522266          DOI: 10.1371/journal.pone.0275220

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Head-mounted virtual reality (VR) provides a multisensory and engaging experience by immersing the user in a 360° computer-generated environment. This technology affords an opportunity to change the way that perception and action research is conducted, bringing the potential for tightly controlled yet naturalistic experiments that can be conducted while the participant is in motion. Historically, action-perception research has generally involved relatively rigid experimental setups where simple stimuli are presented, with participants indicating their perception with a button press. While this framework has led to major functional and mechanistic advances in our understanding of how the brain processes sensory stimuli, it often treats perception as a passive, unidirectional process and belies the complex reciprocity of the action-perception loop [1]. These experiments typically employ simple, two-dimensional stimuli and are conducted in quiet, confined spaces by stationary participants to achieve a high degree of experimental control [2]. Further, many studies involving movement tend to be restricted by a small number of reaching target locations [3-5] or the movement is limited to small actions such as pressing a button [6-8]. These limitations of typical perception and action experiments are motivating an effort to develop more active, naturalistic experiments [9-14]. The goal is to capture the dynamic, bidirectional richness and complexity of everyday experiences. The promise of head-mounted VR displays is that they will allow us to conduct much needed naturalistic and interactive studies of human perception while giving up little, if any, of the experimental control that is the cornerstone of empirical perception research. With VR, we can undertake increasingly complex questions about perception while also applying the findings to more diverse populations in real-life contexts. 
Neuroimaging research has shown that human brains are more attuned to complex, naturalistic stimuli than to simple, artificial ones [15]. VR technology can be customized to present three-dimensional images [16-18], create the illusion of distant sounds [19,20], and provide haptic feedback to create engaging, multimodal stimuli that represent the lived experiences of research participants [21-23]. VR can also incorporate a high degree of control in a realistic and multisensory environment, ideal for high quality basic research. For example, a recent study used VR in conjunction with eye-tracking to progressively remove the color from peripheral vision during free-viewing of immersive 360° videos, dramatically revealing the limitations of human color perception in the visual periphery [24]. This technology has also been used to assess audiovisual speech perception in children [25] and verticality perception in patients with symptoms of dizziness [26]. VR environments can also be constructed to be responsive to user input, allowing participants to behave closer to how they would in a real-world situation [27-29]. This sense of ‘presence’, which captures the feeling that a user is truly there in the virtual world, results from the immersion the user feels as a result of realistic multisensory illusions [30,31]. This feeling also provides a sense of agency over the environment, increases task engagement, and can affect cognition, social behavior, and memory [1,32,33]. Naturalistic stimuli also capture and maintain attention more authentically than simple two-dimensional stimuli because they tap into more sophisticated top-down attention pathways that incorporate context, prior knowledge, and goals rather than purely feature-based attention [34]. A recent benefit of head-mounted VR lies in its ability to easily capture data from a moving participant, allowing perception and action to be studied simultaneously during active, full-body tasks. 
As most research on perception is conducted with a stationary participant, this ability to concurrently examine how people physically interact with and respond to their environment provides new opportunities to study the action-perception loop. Further, some VR headsets are able to track the position of the hands in real time, including precise finger movements. One such device, the Oculus Quest (Meta, USA), has < 1 cm tracking accuracy in good environmental conditions [35]. The implications of simple and effortless body tracking technology are considerable; in particular, experiments studying human movement, posture, and proprioception in clinical populations stand to benefit from this technology. Crucially, the portability of VR headsets means that research can occur in places that cannot accommodate traditional lab equipment, such as a hospital room or out in the community. Larger groups of more diverse populations can be tested because conditions can be replicated with very high fidelity regardless of the participant’s location or circumstances. Commercially available VR headsets are also impressively accessible in terms of cost, portability, and ease of use. As a portable “lab in the box,” a headset has the potential to increase sample sizes, reach under-studied populations, and promote long-distance scientific collaborations. One area of VR research that has received a great deal of attention is stroke rehabilitation, with a specific focus on visual-motor coordination and perception. Over 100 randomized controlled trials have been conducted testing VR technology with people recovering from stroke, with the majority published in the past five years. There is substantial diversity in the attributes of the investigations: studies have been conducted in the home [36-38], in conjunction with telehealth resources [39-41], and in patients with both acute [42-44] and chronic [45-47] stroke. 
The majority of work on motor rehabilitation only assessed gross motor skills (e.g., reaching) by tracking the position of the handheld controller [44,48] or tracked finger motion by using supplemental specialty equipment [49,50]. However, persistent fine motor dysfunction is a common consequence of stroke and dramatically affects activities of daily living [51,52], requiring rehabilitative techniques that target fine motor skills. Hand-tracking technology built into VR offers a promising avenue to examine the speed, accuracy, and consistency of fine motor movements as baseline assessments and/or measures of rehabilitative progress. To assess the feasibility of using VR technology to study fine motor skills in both healthy and clinical populations, the present study employed hand-tracking to measure accuracy in simple reaching tasks while varying multisensory and cognitive demands. This study was inspired by previous tasks that used mirrors [53] or tablets [54] to manipulate hand or target visibility during reaching. Two different experiments were conducted with healthy young adults: one assessed visual-proprioceptive integration versus isolated proprioception, and the other tested spatial memory. These two tasks were selected to examine the sensitivity of VR-based reaching assessment under different sensory and cognitive conditions. The visual-proprioceptive task was also completed by two individuals with recent cerebellar strokes to evaluate the practicality of successfully collecting this data with individuals with motor or vision difficulties. Overall, the goal of this study was to evaluate whether VR-based hand tracking can serve as a sensitive measure of differences in fine motor movements across various conditions in individuals with and without visuo-motor disabilities.

Materials and methods

For Experiments 1 and 2, healthy young adult participants were recruited from the University of Rochester and the greater Rochester community. For Experiment 3, two patients rehabilitating from cerebellar strokes at Strong Memorial Hospital (Rochester, NY) were recruited. Each healthy participant completed the Edinburgh Handedness Inventory [55] and a demographic survey. All participants had normal or corrected-to-normal hearing, and all healthy participants had normal or corrected-to-normal vision. Written informed consent was obtained from all participants as approved by the University of Rochester Research Subjects Review Board. The virtual reality experiments were conducted using a first-generation head-mounted Oculus Quest running the latest OS/firmware at the time of testing. Unity version 2019.4.2f was used to create the experiments. SideQuest, free third-party software, was used with the scrcpy plugin (https://github.com/Genymobile/scrcpy) so experimenters could monitor what the participant saw on the headset during the experiment. Healthy participants were seated in the experiment room on a stationary chair, whereas participants with recent stroke conducted the experiment in a stationary chair next to their hospital bed. All experiments were conducted with no objects in front of the participants in rooms with good lighting to optimize the environment for hand-tracking. All participants were given a brief introduction on how to navigate the virtual reality setup. Participants were instructed to keep their shoulders against the back of the chair during the entire experiment and were monitored continuously and given reminders as necessary. The Oculus Guardian system, intended to prevent actively moving users from exiting the designated ‘safe’ area by providing a visual warning when the user approaches the periphery of the Guardian area, was disabled to avoid disrupting the experiment. All participants were monitored continuously to maintain a safe experience. 
Participants were told to put the headset on and adjust the straps so that it fit comfortably. Those wearing corrective lenses were able to wear them under the headset. Help was offered if requested. Participants were also shown the inter-pupillary distance slider at the bottom of the headset, and told to move it around until they found their “sweet spot,” where the images/text were clearest and most legible. The inter-pupillary distance on the Quest headset ranges from 58 mm to 72 mm. This wide range allowed participants to adjust the lens spacing for a comfortable viewing experience in VR. Once each experiment loaded, participants viewed a grey, featureless room. Instructions appeared directly in front of them, and rendered representations of each of their hands appeared. These hand renderings moved and articulated in real time, corresponding to the participant’s real hand movements. Participants were asked to indicate which was their dominant hand; once a hand was selected, only that hand was visible and functional for the remainder of the experiment. To ensure the reaching distance was appropriate to the size and motor function of each individual, participants extended their dominant arm to calibrate the reaching distance before each experiment. The distance from the end of the extended arm to the headset was used as the radius of the arc on which target stimuli would appear. Each healthy participant completed one practice session and two separate experiments, the Visible/Invisible Hand experiment and the Memory Delay experiment (see supporting information S1 and S2 Videos). Stroke patients completed one practice session and only the Visible/Invisible Hand experiment to reduce fatigue and avoid possible confounding cognitive factors in the Memory Delay experiment. 
In each trial of the practice session, a pink sphere (target) appeared along an invisible 60-degree arc at arm’s length in front of the participant; the radius of this arc was set by the extended arm in the experiment’s introduction and the arc extended indefinitely vertically. Using their dominant hand, participants were instructed to touch the target sphere with their index finger. Each trial ended when the fingertip passed through the arc; the target would then disappear and the next trial would begin regardless of the accuracy of the reach. They were then instructed to move their hand back to touch a cube that appeared just in front of their chest. The cube served as a reset point that appeared once the target sphere disappeared. Once the index finger touched the cube, the cube would disappear and after 500 ms a new target sphere would appear randomly along the 60-degree arc. The program specifically recorded the difference in degrees between where the tip of the index finger passed through the arc and the center of the target, accounting for both horizontal and vertical error. Participants were encouraged to take breaks by resting their hands on their lap to avoid fatigue. Participants completed practice trials until they felt comfortable with the motions and the experimenter deemed them ready to begin the experiments. The two experimental conditions retained the same basic structure as the practice session, but with two sets of key modifications.
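The trial geometry described above (targets placed on a 60-degree arc at the calibrated arm's-length radius, error recorded as the angular difference between where the fingertip crosses the arc and the target's center) can be sketched in a few lines. This is an illustrative reconstruction with hypothetical names and coordinate conventions, not the study's Unity code:

```python
import math
import random

def random_target_on_arc(radius, arc_deg=60.0):
    """Place a target at the calibrated arm's-length radius on a
    horizontal arc centred straight ahead (x right, z forward)."""
    theta = math.radians(random.uniform(-arc_deg / 2, arc_deg / 2))
    return (radius * math.sin(theta), 0.0, radius * math.cos(theta))

def angular_error_deg(origin, target, fingertip):
    """Angle in degrees between the origin->target and origin->fingertip
    directions: the reaching error where the finger crosses the arc."""
    vt = [t - o for t, o in zip(target, origin)]
    vf = [f - o for f, o in zip(fingertip, origin)]
    dot = sum(a * b for a, b in zip(vt, vf))
    norm = math.sqrt(sum(a * a for a in vt)) * math.sqrt(sum(a * a for a in vf))
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

Because the vectors are three-dimensional, the same dot-product computation captures both horizontal and vertical error, as the recorded measure in the study does.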

Experiment 1 – Visible/Invisible Hand

This experiment used the same introduction and structure as the practice session, but in 50% of the trials the rendering of the dominant hand became invisible during the reaching phase (Fig 1A and 1B). In these invisible-hand trials, the participant had no visual feedback about the position of their hand while reaching for the target, forcing high reliance on proprioception. The hand reappeared only after the reach movement was completed. Each participant completed 10 practice trials and 200 experimental trials, comprising 100 hand-visible trials randomly interspersed with 100 hand-invisible trials. For examples of both types of trials, see supporting information S1 Video. The experiment took 5–6 minutes to complete in healthy adults.
Fig 1

Task and stimuli in the Visible/Invisible Hand experiment.

Each trial starts with a green cube appearing in front of the participant’s chest. After the cube is touched, the cube disappears and a pink target sphere appears along a 60-degree arc in front of the participant at arm’s length. When the participant’s index finger passes through the arc, the target explodes and the trial ends. A new cube appears to begin the next trial. A) In the visible-hand condition, the rendering of the hand is visible during the entire trial. B) In the invisible-hand condition, the rendering of the hand is invisible during the reach phase. That is, the hand rendering disappeared when the cube was touched, reappearing only at the completion of the reach movement. For a video of this experiment, see supporting information S1 Video.


Experiment 2 – Memory Delay

This experiment used a similar introduction and structure to the practice session, but in 50% of trials we imposed a memory demand on the reaching task (Fig 2). The target appeared 500 ms after the participant touched the reset cube and was followed by a tone 1200 ms later. The tone had a frequency of 440 Hz and a duration of 100 ms, and was set at a volume comfortably audible for each individual participant. The tone was presented bilaterally and acted as a cue for the participant to reach for the target location. In this experiment, the hand remained visible for the entire duration of the experiment. The critical manipulation was the visibility of the target before the reach. In 50% of the trials the target sphere remained visible for the entire duration of the trial (Fig 2A). In the remaining 50% of the trials, the target sphere appeared for only 200 ms, disappeared for the remaining 1000 ms before the tone, and remained invisible during the subsequent reach movement (Fig 2B), requiring the use of spatial memory to guide the reach. This approach mirrors established memory-guided reaching tasks by introducing a one-second delay [56,57]. As in Experiment 1, participants completed 10 practice trials and 200 experimental trials. The program randomly interspersed the 100 trials in which the target sphere remained visible and the 100 trials in which the target sphere disappeared. For examples of both types of trials, see supporting information S2 Video. The experiment took 8–9 minutes to complete.
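The trial timing above composes into a simple event schedule (relative to touching the reset cube: target onset at 500 ms, target offset at 700 ms in delayed trials, go tone at 1700 ms). A minimal sketch of that arithmetic, with illustrative names:

```python
def trial_schedule(delayed):
    """Event times in ms, relative to the moment the reset cube is touched."""
    target_on = 500                  # target appears 500 ms after cube touch
    tone = target_on + 1200          # go tone sounds 1200 ms after target onset
    events = [(target_on, "target appears")]
    if delayed:
        # In delayed trials the target is visible for only 200 ms
        events.append((target_on + 200, "target disappears"))
    events.append((tone, "tone: reach now"))
    return events
```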
Fig 2

Task and stimuli in the Memory Delay experiment.

Each trial starts with a green cube appearing in front of the participant’s chest. 500 ms after the cube is touched, the pink target sphere appears along a 60-degree arc at arm’s length. 1200 ms later, a tone indicates that the participant is free to reach out to the target. When the participant’s index finger passes through the arc, the target explodes and the trial ends. A new cube appears to begin the next trial. A) In the standard condition, the target remained visible for the entire trial. B) In the memory-delay condition, the target disappeared 200 ms after its appearance, remaining invisible for the 1000 ms before the tone was played and during the subsequent reach movement. For a video of this experiment, see supporting information S2 Video.


Experiment 3 – Visible/Invisible Hand after cerebellar stroke

This experiment was identical to Experiment 1, except that the participants were two patients with recent cerebellar stroke, who took between 15 and 20 minutes to complete the experiment.

Statistical analysis

All experiments measured the reaching accuracy of the dominant index finger by calculating the difference in degrees between the center of the target sphere and the point where the tip of the index finger passed through the 60-degree arc on which the target could appear. This accuracy was compared between the two conditions of each experiment. In addition, each individual’s precision was calculated as the standard deviation of their endpoint accuracy in Experiments 1 and 2. In Experiments 1 and 3, the reaching time, defined as the amount of time between when the target appeared and when the participant’s index finger crossed the arc, was also recorded. These data are not available for Experiment 2. In all experiments, reaching accuracy was the main outcome measure, as it has the greatest potential clinical significance and effect on quality of life and independence. Statistical testing was done with SPSS software version 28 (IBM Corp, Armonk, NY, USA) or MATLAB 2021a software (Mathworks, Natick, MA, USA). Shapiro-Wilk tests of normality were conducted on reaching time, accuracy, and precision in each condition in all experiments, with one or more conditions in each experiment determined to be non-normally distributed. Related-samples Wilcoxon signed-rank tests were therefore used in Experiments 1 and 2, as statistics were assessed on a group level. In Experiment 3, independent-samples Mann-Whitney U tests were conducted because statistics were assessed on an individual level. In Experiments 1 and 3, outliers > 3 standard deviations away from each individual’s mean were removed from the reaching time data. In Experiment 1, an average of 2.05 ± 1.00 outlier trials in the visible condition and 2.30 ± 1.26 trials in the invisible condition were removed per participant. 
In Experiment 3, 7 outlier trials in the visible condition and 2 in the invisible condition were removed for patient 1, and 6 outlier trials in the visible condition and 8 in the invisible condition were removed for patient 2. In all three experiments, reaching accuracy was also assessed using data from only the first 25 trials to test whether our approach is sensitive enough to detect the main results in substantially abbreviated versions of our experiments. Slopes of the change in reaching accuracy over time were normally distributed across conditions and experiments; one-sample t tests were conducted to assess whether the slope of the average error differed from zero. No power analyses were conducted prior to data collection because no suitable previous work was available to estimate the needed sample size.
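As a sketch of the outlier rule above, trials more than 3 standard deviations from an individual's mean reaching time are dropped. Whether the sample or population standard deviation was used is not stated, so the sample form below is an assumption, and the function name is illustrative:

```python
import statistics

def remove_outliers(reaching_times, k=3.0):
    """Drop trials whose reaching time lies more than k standard
    deviations from the participant's own mean (Experiments 1 and 3)."""
    if len(reaching_times) < 2:
        return list(reaching_times)
    mu = statistics.mean(reaching_times)
    sd = statistics.stdev(reaching_times)  # sample SD: an assumption
    if sd == 0:
        return list(reaching_times)
    return [t for t in reaching_times if abs(t - mu) <= k * sd]
```

The cleaned per-condition values would then feed the Wilcoxon signed-rank (group-level) or Mann-Whitney U (individual-level) comparisons described above, e.g. via `scipy.stats.wilcoxon` or `scipy.stats.mannwhitneyu`.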

Results

Twenty participants, 8 male and 12 female, participated in Experiment 1, with a mean age of 23.4 (st. dev. = 2.6). Eighteen of these participants, 8 male and 10 female, also participated in Experiment 2, with a mean age of 23.6 (st. dev. = 2.7). Information on the two patients rehabilitating from recent cerebellar stroke is found in Table 1. All participants, including patients, were right-handed, and reported no developmental or psychiatric disorders.
Table 1

Descriptions of patients included in recent stroke cohort.

Patient | Age (years) | Time since stroke at time of participation (days) | Type of stroke | Level of motor/visual disability at time of participation
Patient 1 | 72 | 10 | Large ischemic infarct in right cerebellum | No muscular weakness, severe ataxia in right arm/leg; reported horizontal diplopia
Patient 2 | 75 | 1 | Multifocal ischemic strokes, including large infarct in right cerebellum and right occipital lobe | No muscular weakness, mild ataxia in right arm/leg; right homonymous hemianopia
The virtual hand experiment revealed a clear, robust difference in reaching accuracy when the virtual rendering of the hand was visible compared to when it was invisible (Fig 3A and 3B). We found a significant difference between the average reaching error in the visible (2.24° ± .25°) and invisible (3.80° ± .19°) hand conditions (T = 204.00, z = 3.70, p < .001; Fig 3A). This difference was observed in a large majority of individual participants (Fig 3B). There was also a significant difference between the average reaching precision in the visible (1.58° ± .76°) and invisible (1.93° ± .69°) hand conditions (T = 160.00, z = 2.05, p = .04). Precision and accuracy were positively correlated for both the visible (r(18) = .708, p < .01) and invisible (r(18) = .49, p = .02) hand conditions. There was no significant difference between the average reaching times in the visible (625 ms ± 105 ms) and invisible (617 ms ± 160 ms) hand conditions (T = 87.00, z = -.67, p = .50).
Fig 3

Results of the Visible/Invisible Hand experiment in healthy adults.

(A) Group-level average reaching error as a function of hand visibility in all 100 trials. Yellow: Visible-hand condition. Blue: Invisible-hand condition. Error bars denote the standard error of the mean. (B) Results for 20 individual participants as a function of hand visibility in all 100 trials. (C) Group-level average reaching error as a function of hand visibility in the first 25 trials. (D) Results for 20 individual participants as a function of hand visibility in the first 25 trials.

To determine the sensitivity of this experiment at capturing differences in reaching accuracy, we repeated these statistical tests with only the first 25 trials of each condition. The difference between the visible (2.44° ± .37°) and invisible (3.39° ± .52°) hand reaching accuracy remained significant (T = 199.00, z = 3.51, p < .001). This finding, displayed in Fig 3C and 3D, confirms that the length of this experiment could be reduced to a fraction of the original length and still provide the same reliable, highly significant result in healthy adults. Participant-level data is shown in Fig 4 to demonstrate the robust consistency of this data across participants and across the duration of the experiment.
Fig 4

Reaching errors for each individual trial in 20 healthy adult participants in the Visible/Invisible Hand experiment.

This depiction of the data allows for visualization of data stability over the course of the experiment. Yellow: Visible-hand condition. Blue: Invisible-hand condition.

To measure the stability of task performance over time and detect possible learning or fatigue effects, we assessed whether reaching accuracy in either condition changed throughout the course of the experiment. On a group level, the slope of the average error was not significantly different from zero in both the visible hand (m = .002, std dev = .01, t19 = .93, p = .36) and the invisible hand condition (m = -.0005, std dev = .01, t19 = -.27, p = .79). Evidently, performance remained steady over the course of the full experiment, implying that there were no measurable learning or fatigue effects.
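The stability check described above reduces, per participant, to the least-squares slope of reaching error against trial number; the group-level comparison of these slopes against zero then uses a one-sample t test (e.g. `scipy.stats.ttest_1samp`). A minimal sketch with an illustrative name:

```python
def error_slope(errors):
    """Least-squares slope of reaching error vs. trial number (1-based);
    a slope near zero indicates no learning or fatigue effect."""
    n = len(errors)
    mean_x = (n + 1) / 2.0
    mean_y = sum(errors) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(errors, start=1))
    den = sum((x - mean_x) ** 2 for x in range(1, n + 1))
    return num / den
```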

Experiment 2 – Memory Delay

The results of the Memory Delay experiment followed the same pattern as the Visible/Invisible Hand experiment, though the results were slightly less robust. We found a significant difference between the average reaching error in the non-delayed standard condition (2.28° ± .27°) and the delayed target condition (3.45° ± .32°; T = 170.00, z = 3.68, p < .001; Fig 5A). Individual participant data is shown both as averages (Fig 5B) and with all trials shown (Fig 6). There was a significant difference between the average reaching precision in the standard (1.47° ± .70°) and delayed target (3.36° ± 3.54°) conditions (T = 155.00, z = 3.03, p < .01). Precision and accuracy were positively correlated for both the standard (r(16) = .48, p = .04) and the delayed (r(16) = .69, p < .01) conditions.
Fig 5

Results of the Memory Delay experiment in healthy adults.

(A) Group-level average reaching error as a function of memory demand in all 100 trials. Yellow: Non-delayed standard condition. Blue: Delayed condition. Error bars denote the standard error of the mean. (B) Results for 18 individual participants as a function of memory demand in all 100 trials. (C) Group-level average reaching error as a function of memory demand in the first 25 trials. (D) Results for 18 individual participants as a function of memory demand in the first 25 trials.

Fig 6

Reaching errors for each individual trial in 18 healthy adult participants in the Memory Delay experiment.

This depiction of the data allows for visualization of data stability over the course of the experiment. Yellow: Non-delayed standard condition. Blue: Delayed condition.


Additional testing including only the first 25 trials continued to yield significant differences between the standard (2.00° ± .21°) and delayed (3.37° ± .92°) target conditions with respect to reaching accuracy (T = 158.00, z = 3.16, p < .01). Fig 5C and 5D demonstrate this robust finding after only a quarter of the total trials, affirming that the total experiment could be substantially shorter than the original and still reliably distinguish between trial conditions. We again tested whether reaching accuracy in the two conditions changed over the course of the experiment to evaluate whether there were any learning or fatigue effects. On a group level, the slope of the average error was not significantly different from zero in both the standard (m = .0019, std dev = .01, t17 = -.006, p = .464) and delayed condition (m = .000674, std dev = .02, t17 = .45, p = .773). Thus, as with the first experiment, there were no significant changes in accuracy over time.

Experiment 3—Visible/Invisible Hand after cerebellar stroke

We focused on the Visible/Invisible Hand experiment in patients with recent cerebellar strokes because the multisensory visual-proprioceptive interaction emphasizes body coordination, which is often affected by stroke [52]. This also minimized testing burden for the patients, who completed the experiment with their affected hands. In both patients, we found clear differentiation of reaching accuracy with and without the assistance of vision (Fig 7A and 7C). Significant differences between the average reaching error in the visible (patient 1: 5.23° ± 2.17°; patient 2: 3.49° ± 2.41°) and invisible (patient 1: 8.94° ± 3.47°; patient 2: 7.56° ± 2.60°) hand conditions were found on an individual level: patient 1 U(Nvisible = 99, Ninvisible = 99) = 3872.00, z = -2.55, p = .01; patient 2 U(Nvisible = 99, Ninvisible = 99) = 8053.00, z = 7.82, p < .001. There were also significant differences between the average reaching times in the visible (patient 1: 1781 ± 1270 ms; patient 2: 4339 ± 6066 ms) and invisible (patient 1: 1475 ± 916 ms; patient 2: 2922 ± 2145 ms) hand conditions (patient 1: U(Nvisible = 94, Ninvisible = 98) = 3724.50, z = -2.29, p = .02; patient 2: U(Nvisible = 95, Ninvisible = 92) = 3615.00, z = -2.04, p = .04).
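The per-patient comparisons above use the Mann-Whitney U test within a single individual. A minimal sketch of such a within-individual comparison, using hypothetical per-trial errors (the means and SDs loosely echo the reported values but are made up), might look like:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical per-trial reaching errors (deg) for one patient.
visible = rng.normal(loc=5.2, scale=2.2, size=99)
invisible = rng.normal(loc=8.9, scale=3.5, size=99)

# Two-sided Mann-Whitney U test: within-individual comparison of conditions,
# making no normality assumption about the per-trial error distributions.
u, p = mannwhitneyu(visible, invisible, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3g}")
```

The nonparametric test is a natural choice here because each patient contributes a full trial-level distribution per condition, and those distributions need not be Gaussian.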
Fig 7

Results of the Visible/Invisible Hand experiment in patients with recent cerebellar strokes.

(A) Reaching error as a function of hand visibility in all 100 trials in patient 1. Yellow: Visible-hand condition. Blue: Invisible-hand condition. Error bars denote the standard error of the mean. (B) Reaching error as a function of hand visibility in the first 25 trials in patient 1. (C) Reaching error as a function of hand visibility in all 100 trials in patient 2. (D) Reaching error as a function of hand visibility in the first 25 trials in patient 2.

We again assessed reaching accuracy after only 25 trials for each individual patient. The difference between the visible (patient 1: 5.63° ± 1.21°; patient 2: 4.98° ± 3.23°) and invisible (patient 1: 11.66° ± 3.39°; patient 2: 6.89° ± 3.18°) hand reaching accuracy was significant: patient 1 U(Nvisible = 25, Ninvisible = 25) = 605.00, z = 5.68, p < .001; patient 2 U(Nvisible = 25, Ninvisible = 25) = 418.00, z = 2.05, p = .04 (Fig 7B and 7D). Participant-level data are shown in Fig 8.
Fig 8

Reaching errors for each individual trial in two patients with recent cerebellar stroke.

This depiction of the data allows for visualization of data stability over the course of the experiment. Yellow: Visible-hand condition. Blue: Invisible-hand condition.

Given the weakness and fatigue associated with cerebellar stroke, we evaluated the slope of the reaching error over time in each individual participant to assess for changes in accuracy over the course of the experiment. To determine statistical significance, we performed a bootstrap analysis in which we generated 10,000 bootstrap data sets. In each data set, trials were randomly resampled without replacement, thus retaining the overall distribution of the results but eliminating any temporal patterns of performance. This allowed us to assess the probability that the observed slopes (Fig 8) differed from zero. In the visible hand condition, patient 1 had a slope of -.024 (p = .002) and patient 2 had a slope of -.025 (p = .001), both showing significant improvement in performance over time. In the invisible hand condition, patient 1 had a negative slope of -.055 (p < .0001) and patient 2 had a positive slope of .019 (p = .042). These findings show a mix of improvement and worsening that may reflect a learning effect or fatigue throughout the experiment.
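Resampling trials without replacement, as described above, amounts to a permutation test of the slope: shuffling trial order preserves the error distribution while destroying any temporal structure. A sketch of that procedure, with a hypothetical error series standing in for a patient's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 100-trial error series (deg) with a small downward trend
# standing in for a practice effect.
n_trials = 100
trials = np.arange(n_trials)
errors = rng.normal(loc=6.0, scale=2.0, size=n_trials) - 0.02 * trials

observed_slope = np.polyfit(trials, errors, 1)[0]

# Shuffle trial order 10,000 times (resampling without replacement):
# keeps the overall error distribution, destroys any temporal pattern.
n_resamples = 10_000
null_slopes = np.empty(n_resamples)
for i in range(n_resamples):
    null_slopes[i] = np.polyfit(trials, rng.permutation(errors), 1)[0]

# Two-sided p value: fraction of shuffled slopes at least as extreme.
p = np.mean(np.abs(null_slopes) >= abs(observed_slope))
print(f"slope = {observed_slope:.4f}, permutation p = {p:.4f}")
```

Because each shuffle reuses exactly the observed errors, the null distribution reflects only the ordering, which is the quantity of interest for learning or fatigue effects.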

Discussion

Our results provide early evidence for the utility of built-in hand tracking in head-mounted VR equipment to quickly capture precise information about reaching accuracy. We were able to establish a significant facilitatory effect of vision on reaching accuracy (Fig 3) and demonstrate that adding memory demands impairs reaching accuracy (Fig 5). Our findings that people reach more accurately and precisely, though not more quickly, toward a point when they can see their hand and when the target is visible are not surprising. They confirm earlier data that vision improves accuracy and precision during reaching [58,59] and that reaching accuracy and precision deteriorate when memory is required to locate the target [60,61]. Rather, the novelty of the methods outlined in this paper lies in the manipulation of the sensory experience beyond what is possible in physical reality while collecting robust, consistent data anywhere in a matter of minutes. By controlling the visual feedback provided by the hand rendering, we were able to uncouple vision and proprioception in the Visible/Invisible Hand experiment, offering a window into how these sensory modalities interact. Typically, vision and proprioception are difficult to tease apart without the use of complex equipment such as mirrors [62] and robotics [63], but this new VR technology allows for easy and modifiable adaptations. For example, instead of removing the visual representation of the hand, the rendering of the hand could instead be delayed or shifted to a different location to measure how these changes influence the weighting of visual and proprioceptive information. This weighting remains poorly understood in various clinical populations, such as cerebral palsy [64,65], Parkinson's disease [66,67], and autism spectrum disorder [68,69], that would benefit from research that can isolate and analyze the contributions of each sense and how they change over time.
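One common formalization of the visual-proprioceptive weighting discussed above is reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. The paper does not fit such a model, so the sketch below is purely illustrative, with made-up estimates and variances:

```python
# Reliability-weighted (maximum-likelihood) cue combination: each cue is
# weighted by its inverse variance. All numbers here are hypothetical.
def combine_cues(est_vis, var_vis, est_prop, var_prop):
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    est = w_vis * est_vis + (1 - w_vis) * est_prop
    var = 1 / (1 / var_vis + 1 / var_prop)  # never exceeds either cue alone
    return est, var

# Vision reports 10 deg (reliable); proprioception reports 12 deg (noisier).
est, var = combine_cues(est_vis=10.0, var_vis=1.0, est_prop=12.0, var_prop=4.0)
print(est, var)  # combined estimate sits nearer the more reliable cue
```

Under this account, the multisensory facilitation observed in the Visible/Invisible Hand experiment corresponds to the combined variance being lower than that of proprioception alone.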
By introducing a delay and requiring the participants to conduct their reaching movements based on recall, the Memory Delay experiment further assesses reaching in circumstances that require greater cognitive resources. While the delay in this paradigm was relatively short at 1 second, it still had a clear effect on reaching accuracy. Although this effect of memory is expected, our approach offers a way to investigate the spatial representation of memory in a three-dimensional setting. The environment can remain tightly controlled while objects are manipulated, allowing for structured and replicable assessments of spatial memory and navigation. Populations such as older adults and people with recent traumatic brain injury will benefit from further research on the interaction between memory and the ability to navigate a three-dimensional space [70,71]. Our study also contributes to decades of research confirming benefits when multisensory information is available in domains as varied as memory [72], learning [73], and reaction time [74]. By validating the use of VR to study multisensory processes, this new technique provides the capacity to expand on these traditional paradigms to evaluate participants as they move interactively with their environment. Overall, this approach allows for the measurement of action-perception data in a multisensory, naturalistic setting that can be adapted to mimic a variety of real-life scenarios better than the simple and predictable conditions typically found in the lab. Critically, these experiments also show that VR can be used to efficiently and effectively measure reaching accuracy not only in healthy individuals, but also in those with vision or motor disabilities caused by cerebellar infarct.
The self-paced nature of these experiments means that they can be adapted to suit individuals with limited mobility, and the ability to adjust the inter-pupillary distance and head position allows for reasonable correction of minor visual issues, as done with the first patient's diplopia. These features allow for the collection of baseline information on post-stroke gross and fine motor skills at a very early stage of recovery and provide the opportunity to potentially distinguish between the effects of ocular and cerebellar issues. Of note, both the results with healthy young adults and those with patients were significant after only a fraction of the trials, indicating that the task could be substantially shortened and still provide a sufficiently precise measure of reaching accuracy. This rapid pace is particularly significant in the context of individuals with muscle weakness who may not be able to sustain activity for long periods of time. Our results also show that, even over a limited number of trials, individuals with recent stroke demonstrate changes in their reaching accuracy, suggesting that this paradigm is sensitive to improvement or deterioration, which is critical for use in rehabilitative training. Notably, we detected a dissociation between the amount of fatigue in the isolated proprioception trials and the visual-proprioceptive integration trials in one of the stroke patients. The ability to measure these differences offers exciting opportunities to learn more about how specific sensory properties are affected by stroke. Moreover, the back-and-forth reaching design of our experiments mimics a clinical evaluation of motor coordination called the finger-to-nose test. By evaluating a patient's ability to quickly and accurately reach for both an externally-referenced target (the administrator's finger) and a self-referenced target (the patient's nose), this clinical test serves as a rapid yet imprecise way to measure coordination.
Many clinicians use the finger-to-nose test to measure upper-body coordination over the course of stroke recovery [75,76], but it remains a subjective tool with limited external validity. Using our VR paradigm, these same fine motor skills can be assessed in a way that provides detailed measurements without the need for a trained clinician to administer a coordination assessment. As preliminary work, this study contains several limitations. While there are many benefits to the flexibility of a VR experience, its self-guided nature does introduce some differences in stimulus presentation from person to person. This technique achieves more realistic interactions in a less repetitive and predictable environment, but does somewhat decrease the degree of control the experimenter has over the consistency of the experience. The experiments detailed above were self-paced, meaning that some participants could choose to move quickly and may be prone to greater errors, while others could choose to take their time and demonstrate higher accuracy. Future work in which rate of action is a concern could employ a system to artificially pace the participant. In the present study, however, because each participant served as their own control and the trials of the two conditions in each experiment were randomly intermixed, we believe that the differences between conditions remain a valid metric of accuracy differentiation on an individual basis. This single-subject design also accounts for any variability in familiarity with VR, which otherwise could have provided an advantage to those who have used VR in the past. The technology itself also has limitations, as hand-tracking accuracy is constrained by camera frame rate and figure/ground segmentation issues.
These problems could cause gaps in tracking that may influence results, but during each experiment the environment was well-lit and kept clear of objects that would interfere with tracking to reduce these confounds. Our sample of adults with recent cerebellar stroke is small and is not representative of the wide variability of motor and visual complications that can be caused by a stroke. Our feasibility experiment is intended only to show that VR is sensitive and adaptable, can be used by individuals with a variety of limitations, and can be conducted at the bedside. The patient group is also composed solely of older adults, meaning that at this stage only limited conclusions can be made about the role of recent stroke because age is a strong confounding factor. Future work should include a sample of healthy older adults who can be compared to the group of older adults with recent stroke to evaluate accuracy and learning differences.

Conclusion

This paper highlights the promising application of commercially available virtual reality headsets to efficiently study perceptual and motor processing during naturalistic hand movements. Differences in reaching accuracy in various conditions were measurable in a short amount of time with very few trials. By studying the action-perceptual loop in a dynamic, multisensory environment, the field of psychophysics can move closer to understanding how perception varies across real-life settings. The adaptability and mobility of this equipment also offers opportunities to uncouple visual and proprioceptive cues to study the weighting and interaction of these domains in clinical populations in any setting. As affordable and accessible technology, future work incorporating additional participant groups and multisensory environments offers great potential to understand how different factors affect sensory processing.

Video of example Visible/Invisible Hand experiment trials.

Video of example Memory Delay experiment trials.

22 Jun 2022
PONE-D-22-07057
Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke
PLOS ONE

Dear Dr. Isenstein,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers find this study to be interesting; however, one reviewer does not feel the issues raised in the introduction have been addressed in the study. Further, some clarification in the methods is needed, and concerns with the statistical analyses are noted. The authors should take into consideration the comments below. Please submit your revised manuscript by Aug 06 2022.

Kind regards,
Krista Kelly, Ph.D.
Academic Editor, PLOS ONE

Additional Editor Comments:

1) Why was timing information not included? Impaired performance can be shown in more than just accuracy, for example reaction time (time to reach onset, time to reach the dot, etc.) as well as precision (noted by Reviewer 2).

2) The figures provided are pixelated and hard to read. Please upload ones that adhere to the PLOS ONE policies.

3) How was sample size calculated? Did the authors have enough power with 20 participants? Further, there is no statistical analysis section. Please add a section on power and statistical analyses to the methods.

4) The reviewers had issues with the statistical analyses that need to be addressed.

Reviewers' Responses to Questions

1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes. Reviewer #2: Partly.

2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know. Reviewer #2: No.

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
Reviewer #1: Yes. Reviewer #2: Yes.

4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes. Reviewer #2: Yes.

5. Review Comments to the Author

Reviewer #1: This is an interesting paper - the introduction raises some important issues, but I don't necessarily feel these issues have been addressed in the study. I'm still not exactly sure how the research advances the field - I think this is mostly because it has not been clearly articulated in a way that makes sense to lay readers. Some specifics: 1. The introduction is very long and under-referenced. Could it be a little more concise (with references added to support statements where needed)? 2. The abstract lacks detail on the design of the studies and number of participants. 3. Use of the term 'subjects' is not ideal - it should be 'participants'. 4. The research question (at the end of the intro) is not specific - how is feasibility defined? This is important given the specificity of the research methods. 5.
Results (number, characteristics of participants) are presented in the 'methods'. 6. There are several figures presented which are hard to interpret - perhaps a more selective approach and better explanation would help. I am not able to assess the statistics.

Reviewer #2: Review of PONE-D-22-07057

Summary: The authors aim at testing the feasibility of a virtual reality system in assessing reaching performance, both in healthy adults and in pathological populations. The study is designed around a relatively standard paradigm about reaching accuracy, in which participants were to reach to a visual target, either with vision of their virtual hand or not, and either with online visibility of the target or not (memory-guided). In addition, a short version of the first task was applied to two stroke patients. The results show the well-established phenomenon that reach endpoint accuracy is poorer when the moving hand and/or the target is invisible. However, the authors emphasize as the main novelty of this work the feasibility of using such VR systems for easy, portable, and reliable measures of human fine motor skills.

Main comments: The study is well-written, easy to follow, and makes its contributions explicit. It is technically sound, the performed analyses make sense, and the results are plausible. I do not have many comments, nor do I feel that any of these require particular attention before recommending to the editor that this manuscript can be part of the PLoS One table of contents. I would nevertheless encourage the authors to address the following issues.

1. The assessment of the virtual reality tool is based on the measurement of endpoint constant error (accuracy). This is a common measure in visuomotor research, and captures well the effects of visual information on manual control of action. I would have expected that the authors would assess the tools in a bit more detail, though.
Accompanying measures of endpoint accuracy is often that of endpoint precision (variable error), which may or may not follow the same pattern as endpoint accuracy (e.g., Monaco et al., 2010, Exp Brain Res). Considering the available data, I believe that reporting endpoint precision (e.g., area and orientation of the 95% CI) would meaningfully enrich the contents of the manuscript.

2. The authors base much of their motivation for this study on the premise that typical experiments impose relatively simple stimuli and restricted scenarios (e.g., lines 64-66), implying that these experiments are far from real-world activities, which is the gap that virtual reality can close. Though this is partly true, there is recently a growing number of studies implementing real-world naturalistic experiments. Examples include the recording of body and gaze when walking in nature (e.g., Matthis et al., 2018, Curr Biol; Valsecchi et al., 2020, ACM in Eye Tracking), or when grasping objects (e.g., Land & Hayhoe, 2001, Vis Res; Voudouris et al., 2019, JoV), just to name a few examples. Without arguing against the importance of assessing virtual reality tools for measuring human behavior, the authors should acknowledge that research in the field has not only been typically constrained, but has also been expanding into real-world settings.

3. Regarding the task itself, I have two main comments. First, as seen in the supplementary videos, the recorded participant is remarkably accurate, considering that the target sphere 'explodes' (hit) in every single trial, even when the moving hand or the target sphere is invisible. Was the target sphere always exploding, no matter whether the participant correctly hit its center? Or is the demonstrated participant a well-trained one? This should be clarified.

Second, the endpoint error is measured as the distance between the center of the sphere and the position of the index finger at the moment when the finger crossed the 60-deg arc. Did the authors consider both remaining directions (lateral error and vertical error) when calculating this error or not?

4. Regarding the analysis, I am not convinced that measuring the p-value as the trials develop is a useful method. First of all, running statistical tests constantly can lead to false positives (even more so when the authors do not correct for the abundant number of statistical tests). The authors should first explain the underlying reason for examining this aspect: is there an underlying hypothesis, which might be addressed in another way? As I understand the development of the authors' rationale, their aim is to see at what stage the experimental effect is systematic/robust, so that they can suggest how much the experiment can be reduced with respect to the number of trials and therefore the time required. If this is the case, running too many statistical tests may also give the impression that only a few trials are necessary to reproduce the effect, but this impression may be false due to a Type I error. An alternative idea would be to split each block of trials performed by each subject into smaller subblocks of, say, 25 trials each, and then run a one-way ANOVA with four levels (epochs) to test whether these change over time. In any case, I recommend removing the analysis of the temporal evolution of the p-value from all sections of the manuscript.

5. More specific comments:

Line 133: Muscle contraction and visual perception in the same sentence, next to each other, read somewhat oddly. It feels there should be a connection between the two, but I do not see an apparent one. Perhaps some revision would help here.

Line 138: How 'small' is 'small'? Writing this sample size in a more explicit way will make the sentence more straightforward.
Line 195: "…they approached the periphery". Was the periphery approached with their hands or with some other part of their body?

Lines 220-221: At what distance was the target presented? From later information, it appears that this was at arm's length, but I think it should be made explicit here already.

Methods: Please mention what statistical tests are being used and whether there were normality checks supporting the use of parametric testing.

Results: Reporting the p-values in a more conventional way would probably facilitate readability.

Discussion: I think some parts can be shortened or merged. For instance, the contents of the paragraph starting at line 470 made me wonder in what way VR can be used effectively to measure endpoint accuracy. Is it based on feasibility (e.g., portability, comfort, battery…) or on the fact that endpoint accuracy values reproduce previous experimental work? Reading the paragraph, it becomes apparent that the former is the correct answer, but this aspect is brought back into discussion a couple of paragraphs later (e.g., starting at line 501). I feel that this paragraph is too detailed and does not contribute much, rather distracting from the main messages. One idea would be to merge such instances into a more concise piece.

Do you want your identity to be public for this peer review? Reviewer #1: No. Reviewer #2: No.

27 Jul 2022

Thank you for your thoughtful and valuable comments on our manuscript. We have addressed the comments below.

Editor:

1) Why was timing information not included? Impaired performance can be shown in more than just accuracy, for example reaction time (time to reach onset, time to reach the dot, etc.) as well as precision (noted by Reviewer 2).

- We welcome the opportunity to include additional data to support our findings. Data on the timing of the reaching has been added for Experiments 1 and 3, with no significant difference between the visible and invisible reaching times in either the healthy control group or the stroke group. These results show that the amount of time spent reaching is not driving the differences in accuracy between the two conditions. Unfortunately, this data is not available for Experiment 2 due to an issue with data collection. In addition, data on reaching precision (as also noted by Reviewer 2) has been added for all three experiments. These findings show a significant effect of precision in Experiment 1 (p = .04) and in Experiment 2 (p < .01).
Of note, we also found significant correlations between precision and accuracy in both Experiment 1 (visible p < .01; invisible p = .02) and Experiment 2 (standard p = .04; delayed p < .01).

2) The figures provided are pixelated and hard to read. Please upload ones that adhere to the PLOS ONE policies.

- Thank you for bringing this to our attention. We changed the file type and this seems to have resolved the issue.

3) How was sample size calculated? Did the authors have enough power with 20 participants? Further, there is no statistical analysis section. Please add a section on power and statistical analyses to the methods.

- At your suggestion, a specific section on statistical analysis has been added to make the statistical information clearer and more accessible; thank you for indicating that this would benefit our paper. Because this was a novel paradigm and prior data of this nature were not available, we were unable to initially conduct a power analysis. As such, we selected 18-20 as our sample size because if more than 20 participants were needed to find a significant result, then the utility of our approach would be questionable.

- We have now also conducted post-hoc power analyses to demonstrate the statistical power of our results; as we expected, power was high. In Experiment 1, post-hoc power analysis to achieve a power of .8 yielded a minimum sample of 3 when all 100 trials were included and a minimum sample of 8 when only the first 25 trials were included. In Experiment 2, post-hoc power analysis to achieve a power of .8 yielded a minimum sample of 4 when all 100 trials were included and a minimum sample of 11 when only the first 25 trials were included. However, we opted not to include these post-hoc power analyses in the final manuscript because they do not offer new information about the credibility of the results.

4) The reviewers had issues with the statistical analyses that need to be addressed.
- Concerns regarding the statistical analysis have been addressed by removing the cumulative p value plots. Instead, we illustrate the robustness of our approach by including a secondary analysis on just the first 25 trials in each condition. Figures 2 and 5 now include bar and line plots showing the group and individual level results after only 25 trials. In the text we also detail that in Experiment 1, after 25 trials, the p value comparing visible and invisible hand reaching accuracy was < .001; in Experiment 2, the p value comparing standard and delayed reaching was < .01; and in Experiment 3, patient 1 had a p value of < .001 and patient 2 had a p value of .04.

Reviewer 1:

This is an interesting paper - the introduction raises some important issues but I don't necessarily feel these issues have been addressed in the study. I'm still not exactly sure how the research advances the field - I think this is mostly because it has not been clearly articulated in a way that makes sense to lay readers. Some specifics:

1. The introduction is very long and under-referenced. Could it be a little more concise (with references added to support statements where needed)?

- Per the reviewer's suggestion, we have made the introduction clearer and more concise, and agree that these changes improve the overall flow and clarity of the paper. Additional references have been added throughout to support the background and context of the introduction.

2. The abstract lacks detail on the design of the studies and number of participants.

- Thank you for catching this oversight - additional detail on the design and size of the experiments has been added to the abstract.

3. Use of the term 'subjects' is not ideal - should be participants.

- The term 'subject' has been replaced by 'participant' throughout the manuscript.

4. The research question (at the end of the intro) is not specific - how is feasibility defined?
This is important given the specificity of the research methods.

- We appreciate the reviewer's inquiry regarding the overarching goal of the paper, to gauge the feasibility of this kind of research. The research question, whether we can use virtual reality technology to assess fine motor skills in healthy adults as well as those with recent stroke, has been clarified at the end of the introduction.

5. Results (number, characteristics of participants) are presented in the 'methods'.

- We have moved the key participant information to the beginning of the Results section.

6. There are several figures presented which are hard to interpret - perhaps a more selective approach and better explanation would help. I am not able to assess the statistics.

- Please see the fourth response to the editor comments above for a full response on how we have made the figures more accessible. In short, the cumulative p value plots have been removed and instead we have added plots and analyses focusing on the accuracy and precision after only 25 trials.

Reviewer 2:

Summary

The authors aim at testing the feasibility of a Virtual Reality system in assessing reaching performance, both in healthy adults and in pathological populations. The study is designed around a relatively standard paradigm about reaching accuracy, in which participants were to reach to a visual target, either with vision of their virtual hand or not, and either with online visibility of the target or not (memory-guided). In addition, a short version of the first task was applied to two stroke patients. The results show the well-established phenomenon that reach endpoint accuracy is poorer when the moving hand and/or the target is invisible. However, the authors want to emphasize, as the main novelty of this work, the feasibility of using such VR systems for easy, portable, and reliable measures of human fine motor skills.

Main comments

The study is well-written, easy to follow, and makes its contributions explicit.
It is technically sound, the performed analyses make sense, and the results are plausible. I do not have many comments, nor do I feel that any of these require particular attention before recommending to the editor that this manuscript can be part of the PLoS One table of contents. I would nevertheless encourage the authors to address the following issues.

1. The assessment of the virtual reality tool is based on the measurement of endpoint constant error (accuracy). This is a common measure in visuomotor research, and it captures well the effects of visual information on manual control of action. I would have expected that the authors would assess the tool in a bit more detail, though. A measure that often accompanies endpoint accuracy is endpoint precision (variable error), which may or may not follow the same pattern as endpoint accuracy (e.g., Monaco et al., 2010, Exp Brain Res). Considering the available data, I believe that reporting endpoint precision (e.g., area and orientation of the 95% CI) would meaningfully enrich the contents of the manuscript.

- Thank you for this insightful comment. We agree that adding endpoint precision as a metric of performance will substantially strengthen the paper, and we have added this information for all three experiments, with the results detailed in the first response to the editor. However, we unfortunately do not have information on the orientation of the reaching error, as only the absolute error regardless of direction was recorded.

2. The authors base much of their motivation for this study on the premise that typical experiments impose relatively simple stimuli and restricted scenarios (e.g., lines 64-66), implying that these experiments are far from real-world activities, which is the gap that Virtual Reality can close. Though this is partly true, there has recently been a growing number of studies implementing real-world naturalistic experiments.
Examples include the recording of body and gaze when walking in nature (e.g., Matthis et al., 2018, Curr Biol; Valsecchi et al., 2020, ACM in Eye Tracking), or when grasping objects (e.g., Land & Hayhoe, 2001, Vis Res; Voudouris et al., 2019, JoV), just to name a few. Without arguing against the importance of assessing Virtual Reality tools for measuring human behavior, the authors should acknowledge that research in the field was not only typically constrained, but that it has also been expanding into real-world settings.

- We appreciate this important consideration being pointed out. Several studies exemplifying work that has been conducted in more naturalistic settings are now referenced at the end of the first paragraph of the introduction.

3. Regarding the task itself, I have two main comments. First, as seen in the supplementary videos, the recorded participant is remarkably accurate, considering that the target sphere 'explodes' (hit) in every single trial, even when the moving hand or the target sphere is invisible. Was the target sphere always exploding, no matter whether the participant correctly hit its center? Or is the demonstrated participant a well-trained one? This should be clarified. Second, the endpoint error is measured as the distance between the center of the sphere and the position of the index finger at the moment when the finger crossed the 60-deg arc. Did the authors consider both remaining directions (lateral error and vertical error) when calculating this error or not?

- We have clarified that the target explodes wherever along the 60-degree arc the finger passes through, not just when the target itself was touched. We now also directly mention that the reaching error captures both vertical and horizontal error.

4. Regarding the analysis, I am not convinced that measuring the p-value as the trials develop is a useful method.
First of all, constantly running statistical tests can lead to false positives (even more so when the authors do not even correct for the abundant number of statistical tests). The authors should first explain the underlying reason for examining this aspect: is there an underlying hypothesis, which might be addressed in another way? From how I understand the development of the authors' rationale, their aim is to see at what stage the experimental effect is systematic/robust, so that they can suggest how much the experiment can be reduced with respect to the number of trials and therefore the time required. If this is the case, running too many statistical tests may also create the impression that only a few trials are necessary to reproduce the effect, but this impression may be false due to a type I error. An alternative idea would be to split each block of trials performed by each subject into smaller sub-blocks of, say, 25 trials each, and then run a one-way ANOVA with four levels (epochs) to test whether these change over time. In any case, I recommend removing the analysis of the temporal evolution of the p-value from all sections of the manuscript.

- Please see the fourth response to the editor comments above for a full response on how we have made the figures more accessible. In short, the cumulative p value plots have been removed and instead we have added plots and analyses focusing on the accuracy and precision after only 25 trials, showing that the total experiment could be a quarter of the original length and still be sensitive enough to the differences between conditions.

5. More specific comments

Line 133: muscle contraction and visual perception in the same sentence, next to each other, reads somewhat odd. It feels there should be a connection between the two, but I do not see an apparent one. Perhaps some revision would help here.

Line 138: How 'small' is 'small'?
Writing this sample size in a more explicit way will make the sentence more straightforward. - REMOVED

Line 195: "…they approached the periphery". Was the periphery approached with their hands or with some other part of their body?

Lines 220-221: At what distance was the target presented? From later information, it appears that this was at arm's length, but I think it should be made more explicit here already.

- Thank you for such helpful and specific comments on individual pieces of language throughout the paper! They have all been addressed and revised according to your suggestions.

Methods: Please mention what statistical tests are being used and whether there were normality checks supporting the use of parametric testing.

- We have also added a specific section for statistical testing to make it clear what is being done. All data have been checked for normality and, due to the presence of at least one non-normal condition in each experiment, Wilcoxon Signed Rank Tests have been used to replace the t tests. These changes have not affected the patterns of significance previously reported.

Results: Reporting the p-values in a more conventional way would probably facilitate readability.

- Please see the above responses regarding how the p values are now being reported.

Discussion: I think some parts can be shortened or merged. For instance, the contents of the paragraph starting in line 470 made me think about in what way VR can be used effectively to measure endpoint accuracy. Is it based on feasibility (e.g., portability, comfort, battery…) or on the fact that endpoint accuracy values reproduce previous experimental work? Reading the paragraph, it becomes apparent that the former is the correct answer, but this aspect is then brought back into discussion a couple of paragraphs later (e.g., starting at line 501). I feel that this paragraph is too detailed and does not contribute much, rather distracting from the main messages.
One idea would be to merge such instances into a more concise piece.

- We agree that the discussion was somewhat circuitous, and it has now been slightly restructured to be more concise and clear about what we want the main takeaways from the paper to be. Thank you for your advice.

Submitted filename: Response to Reviewers.docx

18 Aug 2022
PONE-D-22-07057R1
Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke
PLOS ONE

Dear Dr. Isenstein,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Specifically, the reviewer is requesting more of a discussion on action in the Introduction, and clarification for some of the Methods and Results regarding the direction of the error that is recorded and the reaching time. Other minor suggestions are provided.
Please submit your revised manuscript by Oct 02 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Krista Kelly, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g.
participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have no additional feedback for the authors. They have addressed my previous feedback satisfactorily.

Reviewer #2: The authors have addressed all my comments and I believe that the manuscript now reads more focused and clearer. I still have some remaining comments, some of which arise due to the revisions, whereas a few others concern issues that I could have spotted in the first round. Apologies for not bringing these up earlier. All lines mentioned below concern the track-changes document.

Abstract: the first lines of the abstract greatly focus on the ‘perception’ side and the limitations of the traditional experiments on perception. Though this is true, this study focuses more on action/behavior. I would encourage the authors to either bring in the ‘action’ part more explicitly, or even discuss the perception-action part altogether (rather than only talking about ‘perception’).

Introduction, paragraph 1 (lines 55-68): Similarly here, the focus appears to be on ‘perception’. Yes, perception research started with rather passive, highly controlled designs, but this has now evolved to more naturalistic experiments.
Along the same lines, ‘action’ research started with constrained paradigms, such as the studies of Jeannerod on grasping, but has evolved to whole-body tracking in nature, as the authors also cite the work of Matthis and colleagues. Therefore, as in the abstract, I recommend here also to focus on action, not only on perception.

Methods, lines 149-154: Please introduce the participants of experiment 3 somewhere here. This is important, otherwise the term ‘healthy’ in line 152 reads odd; doing so will also facilitate reading of the subsequent section (e.g., lines 161-162).

Line 207: It remains unclear whether the error is measured only in the lateral direction or whether the vertical direction is also accounted for. Please make this more explicit.

Line 209: did the reaching time indeed consider the moment when the participant ‘touched the target’? In other instances (e.g., lines 199-201), it is mentioned that the trial ended when the participant crossed the arc, implying that this would be used as the reaching time. If reaching time is indeed measured based on when the participant touched the target, what happened in trials in which the participant did not touch the target? If the reaching time is measured based on the time when the participant crossed the arc, this line needs to be revised (as do other instances throughout the manuscript, for instance lines 232, 262, and possibly elsewhere).

Lines 226-227: Could you please clarify whether the 100 trials of one condition were presented before/after the 100 trials of the other condition, or whether the two conditions were randomly interleaved across the 200 trials?

Line 276: “…of their accuracy”. I think the authors here should state “…of their endpoints”. Also, is precision calculated as the SD along the lateral direction only? Please clarify in the manuscript.

Lines 309-312: These read as ‘methods’ but are presented in the ‘results’.
My recommendation is to define the three measures (accuracy, precision, reaching time) in the Methods, one after the other, so that it is easier for the reader to follow the analysis.

Line 326: “of this measure”. Which measure? The previous part refers to two different variables, accuracy and precision. Why not calculate precision and reaching time also on the basis of the first 25 trials? Then the authors, for each experiment, would have a first paragraph with results about the three measures (accuracy, precision, time) considering all trials, and a second paragraph with the respective results when considering only the first 25 trials. Then the results would be easier to follow and interpret. If the authors would like to focus mainly on accuracy, this should become more explicit, ideally with a reason why.

Lines 333-335: Does this refer to reaching time considering all trials or only the first 25 trials of each condition?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********
8 Sep 2022

Thank you for the additional attentive comments on our manuscript. We have addressed the comments below:

Reviewer #2:

1. The authors have addressed all my comments and I believe that the manuscript now reads more focused and clearer. I still have some remaining comments, some of which arise due to the revisions, whereas a few others concern issues that I could have spotted in the first round. Apologies for not bringing these up earlier. All lines mentioned below concern the track-changes document.

a. Thank you again for the valuable comments; we agree that the paper is more streamlined and easier to follow.

2. Abstract: the first lines of the abstract greatly focus on the ‘perception’ side and the limitations of the traditional experiments on perception. Though this is true, this study focuses more on action/behavior. I would encourage the authors to either bring in the ‘action’ part more explicitly, or even discuss the perception-action part altogether (rather than only talking about ‘perception’).

a. We have expanded the beginning of the abstract to emphasize the relevance of both perception and action to this manuscript.

3. Introduction, paragraph 1 (lines 55-68): Similarly here, the focus appears to be on ‘perception’. Yes, perception research started with rather passive, highly controlled designs, but this has now evolved to more naturalistic experiments. Along the same lines, ‘action’ research started with constrained paradigms, such as the studies of Jeannerod on grasping, but has evolved to whole-body tracking in nature, as the authors also cite the work of Matthis and colleagues. Therefore, as in the abstract, I recommend here also to focus on action, not only on perception.

a. The introduction now contains additional mention of the interplay of action and perception as well as past limitations of action research, such as small or restricted movements.

4. Methods, lines 149-154: Please introduce the participants of experiment 3 somewhere here.
This is important, otherwise the term ‘healthy’ in line 152 reads odd; doing so will also facilitate reading of the subsequent section (e.g., lines 161-162).

a. An introduction to the participants in Experiment 3 has been added at the beginning of the Methods section.

5. Line 207: It remains unclear whether the error is measured only in the lateral direction or whether the vertical direction is also accounted for. Please make this more explicit.

a. Further clarification has been added that the error accounts for error in both the horizontal and vertical directions.

6. Line 209: did the reaching time indeed consider the moment when the participant ‘touched the target’? In other instances (e.g., lines 199-201), it is mentioned that the trial ended when the participant crossed the arc, implying that this would be used as the reaching time. If reaching time is indeed measured based on when the participant touched the target, what happened in trials in which the participant did not touch the target? If the reaching time is measured based on the time when the participant crossed the arc, this line needs to be revised (as do other instances throughout the manuscript, for instance lines 232, 262, and possibly elsewhere).

a. Thank you for pointing out this discrepancy; we have clarified throughout the paper that the reaching time begins when the target appears and ends when the index finger passes through the arc, rather than when it specifically touches the target.

7. Lines 226-227: Could you please clarify whether the 100 trials of one condition were presented before/after the 100 trials of the other condition, or whether the two conditions were randomly interleaved across the 200 trials?

a. We have clarified that the two conditions of trials are randomly interspersed together in all experiments.

8. Line 276: “…of their accuracy”. I think the authors here should state “…of their endpoints”.
Also, is precision calculated as the SD along the lateral direction only? Please clarify in the manuscript.

a. We have clarified here that endpoint accuracy is the metric being discussed.

9. Lines 309-312: These read as ‘methods’ but are presented in the ‘results’. My recommendation is to define the three measures (accuracy, precision, reaching time) in the Methods, one after the other, so that it is easier for the reader to follow the analysis.

a. We have reorganized this information so the beginning of the Statistical Analysis section now defines the three metrics of reaching accuracy, reaching precision, and reaching time together.

10. Line 326: “of this measure”. Which measure? The previous part refers to two different variables, accuracy and precision. Why not calculate precision and reaching time also on the basis of the first 25 trials? Then the authors, for each experiment, would have a first paragraph with results about the three measures (accuracy, precision, time) considering all trials, and a second paragraph with the respective results when considering only the first 25 trials. Then the results would be easier to follow and interpret. If the authors would like to focus mainly on accuracy, this should become more explicit, ideally with a reason why.

a. Thank you for pointing out this unclear phrasing; we have clarified that we referred to the sensitivity of the experiment to assess reaching accuracy. We have also stated that reaching accuracy was the primary outcome measure because it has the greatest clinical significance and effect on quality of life and independence.

11. Lines 333-335: Does this refer to reaching time considering all trials or only the first 25 trials of each condition?

a. This line has been moved to clearly indicate that it is referring to reaching time across all 100 trials.

Submitted filename: Response to Reviewers 2.docx
13 Sep 2022

Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke

PONE-D-22-07057R2

Dear Dr. Isenstein,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Krista Kelly, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional): No further comments.

Reviewers' comments:

20 Sep 2022

PONE-D-22-07057R2

Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke

Dear Dr. Isenstein:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Krista Kelly Academic Editor PLOS ONE
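The post-hoc power analyses mentioned in the first round of responses (finding the minimum sample that achieves power .8) can be illustrated with a short sketch. This is a hypothetical example: the effect size below is a placeholder, not a value reported for the study, and it assumes a two-sided paired t test rather than the Wilcoxon test the authors ultimately used.

```python
# Illustrative post-hoc power calculation for a paired design: find the
# smallest sample size whose two-sided paired t test reaches power .8.
# The effect size d is a placeholder, not a value from the study.
import math
from scipy import stats

def paired_t_power(d, n, alpha=0.05):
    """Power of a two-sided paired t test for effect size d (Cohen's dz)
    with n participants, via the noncentral t distribution."""
    df = n - 1
    nc = d * math.sqrt(n)                    # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)   # critical t value
    # Probability of landing in either rejection region.
    return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

d = 1.2          # hypothetical paired effect size
n = 2
while paired_t_power(d, n) < 0.8:            # smallest n with power >= .8
    n += 1
```

Large within-subject effects, like the visible/invisible difference reported here, require only a handful of participants to reach conventional power, which is why the authors' post-hoc minimum samples were so small.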
  72 in total

Review 1.  Naturalistic Stimuli in Neuroscience: Critically Acclaimed.

Authors:  Saurabh Sonkusare; Michael Breakspear; Christine Guo
Journal:  Trends Cogn Sci       Date:  2019-06-27       Impact factor: 20.229

2.  A novel fully immersive virtual reality environment for upper extremity rehabilitation in patients with stroke.

Authors:  Destaw B Mekbib; Dereje Kebebew Debeli; Li Zhang; Shan Fang; Yuling Shao; Wei Yang; Jiawei Han; Hongjie Jiang; Junming Zhu; Zhiyong Zhao; Ruidong Cheng; Xiangming Ye; Jianmin Zhang; Dongrong Xu
Journal:  Ann N Y Acad Sci       Date:  2021-01-14       Impact factor: 5.691

3.  Applying Virtual Reality to Audiovisual Speech Perception Tasks in Children.

Authors:  Maeve Salanger; Dawna Lewis; Timothy Vallier; Tessa McDermott; Andrew Dergan
Journal:  Am J Audiol       Date:  2020-04-06       Impact factor: 1.493

4.  Maximizing post-stroke upper limb rehabilitation using a novel telerehabilitation interactive virtual reality system in the patient's home: study protocol of a randomized clinical trial.

Authors:  Dahlia Kairy; Mirella Veras; Philippe Archambault; Alejandro Hernandez; Johanne Higgins; Mindy F Levin; Lise Poissant; Amir Raz; Franceen Kaizer
Journal:  Contemp Clin Trials       Date:  2015-12-04       Impact factor: 2.226

5.  Effects of Parkinson's disease on proprioceptive control of posture and reaching while standing.

Authors:  M Tagliabue; G Ferrigno; F Horak
Journal:  Neuroscience       Date:  2008-12-14       Impact factor: 3.590

6. [Review] The need for a cognitive neuroscience of naturalistic social cognition.

Authors:  Jamil Zaki; Kevin Ochsner
Journal:  Ann N Y Acad Sci       Date:  2009-06       Impact factor: 5.691

7.  Virtual Reality Reflection Therapy Improves Balance and Gait in Patients with Chronic Stroke: Randomized Controlled Trials.

Authors:  Taesung In; Kyeongjin Lee; Changho Song
Journal:  Med Sci Monit       Date:  2016-10-28

8.  How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

Authors:  Andrea Desantis; Patrick Haggard
Journal:  Sci Rep       Date:  2016-12-16       Impact factor: 4.379

9.  The 'Real-World Approach' and Its Problems: A Critique of the Term Ecological Validity.

Authors:  Gijs A Holleman; Ignace T C Hooge; Chantal Kemner; Roy S Hessels
Journal:  Front Psychol       Date:  2020-04-30

10.  Effects of a Rehabilitation Program Using a Wearable Device on the Upper Limb Function, Performance of Activities of Daily Living, and Rehabilitation Participation in Patients with Acute Stroke.

Authors:  Yun-Sang Park; Chang-Sik An; Chae-Gil Lim
Journal:  Int J Environ Res Public Health       Date:  2021-05-21       Impact factor: 3.390
