
Self-controlled practice and nudging during structural learning of a novel control interface.

Mei-Hua Lee, Shanie A. L. Jayasinghe

Abstract

Self-controlled practice schedules have been shown to enhance motor learning in several contexts, but their effectiveness in structural learning tasks, where the goal is to eventually learn an underlying structure or rule, is not well understood. Here we examined the use of self-controlled practice in a novel control interface requiring structural learning. In addition, we examined the effect of 'nudging', i.e., whether altering task difficulty could influence self-selected strategies and hence facilitate learning. Participants wore four inertial measurement units (IMUs) on their upper body, and the goal was to use motions of the upper body to move a screen cursor to different targets presented on the screen. The structure to be learned in this task was that the signals from the IMUs were linearly mapped to the x- and y-positions of the cursor. Participants (N = 62) were split into 3 groups (random, self-selected, nudge) based on whether they had control over the sequence in which they could practice the targets. To test whether participants learned the underlying structure, they were tested both on the trained targets and on novel targets that were not practiced during training. Results showed that during training, the self-selected group showed shorter movement times relative to the random group, and both self-selected and nudge groups adopted a strategy of tending to repeat targets. However, in the test phase, we found no significant differences in task performance between groups, indicating that structural learning was not reliably affected by the type of practice. In addition, nudging participants by adjusting task difficulty did not show any significant benefits to overall learning. These results suggest that although self-controlled practice influenced practice structure and facilitated performance during training, it did not provide any additional learning benefits relative to practicing on a random schedule in this task.


Year: 2020    PMID: 32287279    PMCID: PMC7156047    DOI: 10.1371/journal.pone.0223810

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.240


Introduction

Given that practice time is often limited in real-world tasks, designing practice schedules that maximize learning within a short period of training is crucial for efficient use of the learner's time and effort. A key element in this regard involves determining who is in control of the practice schedule. In this context, self-controlled practice schedules—i.e., allowing the learner to determine aspects of practice—have emerged as an important means by which learning can be facilitated. The benefits of self-controlled practice have been shown to be fairly robust across a large number of tasks and practice manipulations [1-9], and have been attributed to many factors, including increased active involvement from the learner [10], increased autonomy [11-13], and the role of informational processes [14]. In spite of this evidence for benefits of self-controlled practice in a large number of contexts, its utility in a specific type of learning—structural learning (or schema learning)—has received comparatively little attention [15-17]. Structural learning in the motor context involves extraction of a general rule of a mapping during practice, which can then be used effectively for generalization. For example, when learning how to drive, the goal of the novice driver is not to learn specific movements of the steering wheel per se (e.g., turn the wheel by 90 degrees), but to learn the underlying 'structure' or 'rule' of how steering wheel movements map on to the movement of the car. Learning this structure is essential for generalization—i.e., being able to control the car in novel situations that were never practiced during training.

This type of structural learning becomes even more important in the context of learning to control novel assistive devices. Consider for example an amputee learning to control a prosthetic arm using muscle activity or inertial measurement units [18].
In this case, the underlying rule of how the body motion/muscle activity maps to motion of the prosthetic arm may not be as intuitive as the mapping of steering wheel movements to a car's motion; this situation places an even greater emphasis on the design of efficient practice schedules to learn this mapping. It is important to note that even though prior studies on self-controlled practice have used 'transfer' tests as a measure of generalization of learning [3,5], these have primarily been used in rather well-learned tasks (such as key pressing or throwing) where the underlying schema may already be present through prior experience. In contrast, our focus in this study was on a novel virtual task where the structure could only be learned through practice.

One important element for enhancing structural learning is variability in practice conditions [15,17]. However, it is unclear how this variability should be incorporated into the practice schedule. On the one hand, there is extensive evidence that practice sequences benefit from contextual interference–i.e., learning is generally facilitated when task variations are distributed randomly across trials, instead of being blocked together [19-21]. Moreover, there is also evidence that self-controlled determination of the practice sequence benefits learning [22,23]. On the other hand, these experiments with self-controlled practice schedules have typically been done in the context of multiple tasks (such as different sequences), with no underlying structure connecting the tasks. A potentially problematic feature of self-controlled learning here is that although it may benefit autonomy, it can reduce the random structure of practice, because participants typically tend to engage in more 'blocked' practice by repeating targets, at least early in learning [23-26].
Although one view is that this initial blocked practice is beneficial because it matches the learner's challenge point [27], a downside is that it may foster more 'instance-based' learning (i.e., learning how to solve a particular variation), which may ultimately be detrimental to learning the underlying rule or schema. A related issue with respect to self-controlled strategies is whether learning can be further enhanced by 'nudging' [28]: given that self-selected strategies can be suboptimal because participants may favor immediate short-term gains in performance over long-term learning benefits [29], is it possible to push learners to choose strategies that better support learning? In the context of motor behavior, the term 'nudge' is closely related to the concept of 'constraints' [30] in that both attempt to alter behavior; the main difference is that nudges do not 'forbid' any options or significantly alter the incentives to choose one option [28]. In the current context, given prior evidence that self-controlled practice schedules may encourage too many repetitions of a difficult task (making practice similar to blocked practice), we examined the effect of nudging the learner toward more random practice by manipulating task difficulty so that the perceived difficulty of all task variations was similar.

In this study, we examined the effect of self-controlled practice schedules on structural learning. We used a novel body-machine interface (BoMI) paradigm [31], in which participants moved their upper body to control a screen cursor [32]. Importantly, this mapping of upper body movements to cursor motion was designed to be non-intuitive so that participants could only discover the structure through practice. Practice involved virtual reaching movements to different targets presented on a screen.
We examined whether (i) a self-selected practice schedule (where participants could control which targets they reached to) was superior compared to a random practice schedule where participants did not have such control, and (ii) if nudging by adjusting task difficulty influenced learning relative to self-selected strategies without nudging.

Materials and methods

Participants

We recruited 62 healthy young adults for this experiment (33 females, 29 males; age 24 ± 4 years). We obtained written informed consent from all participants prior to conducting the experiment, and procedures were approved by the IRB at Michigan State University (IRB# 14–751).

Experimental protocol

We utilized the experimental design and setup described in earlier studies [32,33] and summarize the main points here for completeness. Four IMUs (3-Space, YEI Technology, Ohio, USA) were placed on the posterior and anterior ends of the acromioclavicular joint on both sides of the body, attached with Velcro hooks to a customized vest worn by each participant. Each IMU recorded the 2D (roll and yaw) orientation of the segment it was attached to at a sampling rate of 50 Hz. Participants were seated on a chair placed 23" in front of a computer screen. The chair had a backrest, but participants had no other restrictions on motion. We performed an initial calibration to map the IMU signals to the cursor. Briefly, participants performed 'free exploration' movements with their upper body within a comfortable range of motion. We then performed principal components analysis (PCA) on these data and extracted the first 2 principal components: the first controlled the x-axis motion of the cursor, and the second controlled the y-axis motion. Participants were asked to move their upper body in order to move the cursor to a target presented on the computer screen as fast as possible, and as close to the center of the target as possible. The circular target (radius 2.2 cm) was placed at a radial distance of 11.5 cm from the screen center. The cursor had to stay inside the target for 500 ms for the trial to be completed, and the next target could be selected only after the previous target was reached. The experiment consisted of a virtual center-out reaching task divided into 11 blocks: pre-test, training blocks 1–4, mid-test, training blocks 5–8, and post-test. During the testing blocks, the target appeared three times in each of eight directions (4 cardinal, 4 diagonal), resulting in 24 trials per testing block. During the training blocks, the target appeared only along the four cardinal directions, for a total of 20 trials per training block. The number of trials at each target depended on the group the participant was assigned to. The task was custom-written in Matlab® (Mathworks Inc., Natick, MA, USA).
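The PCA calibration described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions (8 signal channels from 4 IMUs x 2 angles; function and variable names are ours), not the authors' code.

```python
import numpy as np

def calibrate_map(imu_signals):
    """Fit a 2D linear map from IMU signals to cursor axes via PCA.

    imu_signals: (n_samples, 8) array of free-exploration data
                 (4 IMUs x 2 angles: roll, yaw).
    Returns the signal mean and the first two principal components;
    PC1 defines the cursor x-axis, PC2 the y-axis.
    """
    mean = imu_signals.mean(axis=0)
    centered = imu_signals - mean
    # PCA via SVD of the centered data; rows of vt are principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:2]

def signals_to_cursor(sample, mean, pcs, gain=1.0):
    """Project one IMU sample onto the two PCs to get cursor (x, y)."""
    return gain * (pcs @ (sample - mean))
```

With such a mapping, every upper-body posture corresponds to one cursor position, and the linear structure of the map is what participants must discover through practice.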

Experimental design

Participants were assigned to one of three groups to test three different practice schedules: 1) Random (20 participants), 2) Self-selected (21 participants), 3) Nudge (21 participants). All groups completed the same pre-, mid-, and post-test blocks; however, the type of training during the training blocks differed across the three groups (Fig 1).
Fig 1

Schematic of experimental setup (left) and protocol (right).

Participants wore IMUs (indicated by the little rectangles on the shoulders) and learned to move a screen cursor to different targets presented on the screen. Three groups of participants (Random, Self-selected, Nudge) practiced the task in a single session. In the eight training blocks (Training 1–8), only the cardinal direction targets were presented. In the three test blocks (Pre/Mid/Post), both cardinal and diagonal direction targets were presented to assess generalization and structural learning.

The Random group had the four practice targets presented in a randomized order during each training block, i.e., participants had no control over which target to practice on each trial; in addition, all 4 targets had to be performed at least once before a target could repeat. In the Self-selected group, participants chose which of the four training targets they wanted to move the cursor to on each trial: at the start of each trial, all 4 targets were shown simultaneously on the screen, and participants decided which target to move to for that trial. In the Nudge group, participants also chose which target to practice (as in the Self-selected group); however, the sizes of the targets were adjusted to make the perceived difficulty of all targets roughly equal (i.e., difficult targets were made larger, and easier targets smaller). Based on a participant's performance in the pre-test, we computed their mean normalized Euclidean error for each of the 4 cardinal targets at 1 second into the movement. Then, for training blocks 1 to 4, the target with the largest error was made to appear bigger than usual (25% increase in radius), and the target with the smallest error was made smaller than usual (25% decrease in radius). The remaining two targets stayed at the usual size.
The 25% change in radius was chosen to act as a 'nudge', i.e., it was large enough that participants could perceive the target as the biggest (or smallest), but not so large that it effectively forced participants to choose (or avoid) that target. For training blocks 5 to 8, the same procedure was repeated based on the Euclidean errors from the mid-test. Participants in the Self-selected and Nudge groups were given the learning goal [34]: they were instructed that although they could choose which target to practice during the training blocks, they would be evaluated on their performance on all targets at the end of training.
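For illustration, the size-adjustment rule can be written as a small function. This is a sketch under assumptions (target labels and function names are hypothetical; the 2.2 cm base radius is from the protocol):

```python
def nudged_radii(errors_by_target, base_radius=2.2):
    """Apply the nudge rule: the target with the largest mean error
    grows by 25% in radius, the one with the smallest error shrinks
    by 25%, and the remaining targets keep the base radius (cm).

    errors_by_target: dict mapping target label -> mean normalized
    Euclidean error at 1 s into the movement (from the preceding test).
    """
    hardest = max(errors_by_target, key=errors_by_target.get)
    easiest = min(errors_by_target, key=errors_by_target.get)
    radii = {}
    for target in errors_by_target:
        scale = 1.25 if target == hardest else 0.75 if target == easiest else 1.0
        radii[target] = base_radius * scale
    return radii
```

For example, with errors {'up': 0.9, 'down': 0.2, 'left': 0.5, 'right': 0.6}, the 'up' target radius becomes 2.75 cm and the 'down' target 1.65 cm, while 'left' and 'right' stay at 2.2 cm.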

Data analysis

All data processing and analyses were conducted using Matlab (Mathworks® Inc., Natick, MA, USA).

Task performance

The primary performance outcome measure was movement time, defined as the time from when the cursor left the center of the screen until the target was successfully reached (i.e., the cursor had stayed inside the target for 500 ms). A reduction in movement time indicated improved task performance. Because the protocol required participants to reach each target before proceeding to the next, no spatial error metrics were computed. A secondary performance measure was the normalized path length, which quantified how smooth and straight the cursor's movement to the target was. It was computed as the distance traveled by the cursor divided by the straight-line distance between the screen center and the target; a reduction in normalized path length indicates straighter paths, with a value of 1 indicating a perfectly straight line.
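The normalized path length is straightforward to compute from a recorded cursor trajectory; the sketch below (our function names, not the authors' code) makes the definition concrete:

```python
import numpy as np

def normalized_path_length(trajectory, start, target):
    """Distance traveled by the cursor divided by the straight-line
    distance from start (screen center) to the target.
    A value of 1 indicates a perfectly straight path.

    trajectory: (n, 2) array of cursor positions over the trial.
    """
    steps = np.diff(trajectory, axis=0)
    path = np.sum(np.linalg.norm(steps, axis=1))
    straight = np.linalg.norm(np.asarray(target, float) - np.asarray(start, float))
    return path / straight
```

Any detour from the straight line inflates the numerator, so values above 1 indicate more circuitous cursor paths.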

Strategy

Since the Self-selected and Nudge groups were free to choose which target(s) to practice, and the Nudge group additionally had the 'difficult' target made easier (by increasing its size), we quantified the strategy that participants used by (i) counting the number of times they selected the 'difficult' target and (ii) calculating the probability of repeating a target (which indexes the degree to which practice was 'blocked').
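The repetition measure can be sketched as a count of immediate repeats in the chosen target sequence (an illustrative implementation; the authors' exact definition may differ):

```python
def repeat_probability(choices):
    """Fraction of trials (after the first) that repeat the previous
    target. A fully 'blocked' schedule approaches 1; a schedule that
    never repeats back-to-back gives 0.

    choices: sequence of target labels in the order practiced.
    """
    if len(choices) < 2:
        return 0.0
    repeats = sum(a == b for a, b in zip(choices, choices[1:]))
    return repeats / (len(choices) - 1)
```

For example, the sequence ['N', 'N', 'N', 'E', 'E', 'S', 'W', 'N'] contains 3 repeats in 7 transitions, giving 3/7.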

Statistical analysis

Training

To first establish that participants improved during training, we used a 2 x 3 (Block x Group) repeated measures ANOVA, where Block (Training blocks 1 & 8) was the within-subjects factor and Group (Random/Self-selected/Nudge) was the between-subjects factor.

Test

To assess structural learning, we used a 3 x 3 (Block x Group) repeated measures ANOVA separately on each performance outcome measure during the testing blocks. Block (Pre/Mid/Post) was the within-subjects factor, and Group was the between-subjects factor. For post hoc comparisons, we focused on the two comparisons related to our aims: (i) self-selected vs. random (to examine the effect of a self-controlled strategy), and (ii) self-selected vs. nudge (to examine the effect of nudging). Violations of sphericity were corrected with the Greenhouse-Geisser correction when needed. The significance level was set at P < 0.05. All statistical analyses were performed in JASP [35]. Because the focus was on the comparison between groups, we also report Bayes factors for the group and group x block terms in the ANOVA; we used the default prior settings in JASP and the BFexclusion column of the 'analysis of effects' table, which indicates the change from prior to posterior exclusion odds [36].

Results

Data from three participants were removed from the data analysis due to incomplete data sets or errors in the calibration files. Therefore, the final sample size was 19 participants for the random group, 19 for the self-selected group and 21 for the nudge group.

Movement time

Training resulted in decreases in movement time, and a group difference. There was a significant main effect of block (F(1,56) = 92.74, P < .001), which indicated a decrease in movement time from the first to the last block, and a main effect of group (F(2,56) = 3.165, P = .050). Planned comparisons showed that the random group had longer movement times than the self-selected group (P = .017), but there were no differences between the self-selected and nudge groups (P = .415). The block x group interaction was not significant (F(2,56) = 2.720, P = .075). During testing, all three groups exhibited a reduction in movement time over the course of the experiment, but there were no group differences (Fig 2A). There was a significant main effect of block (F(1.051,58.849) = 99.6, P < .001). Post hoc tests using the Bonferroni correction showed that movement time reduced significantly (P < 0.001) across the three testing blocks. There was no significant main effect of group (F(2,56) = 0.016, P = .984), or block x group interaction (F(2.102,58.849) = 0.046, P = .96). Splitting the movement times by target direction showed similar trends in both the cardinal and diagonal directions (Fig 2B).
Fig 2

Movement time.

(A) Average movement time as a function of practice for the three groups. All groups decreased their movement time with practice but there were no statistically significant differences in movement time across the three groups during the test blocks. (B) Movement time in the test blocks split by target direction (cardinal/diagonal). Both directions showed similar changes with practice, indicating structural learning.

To examine the strength of these null results, we also computed Bayes factors. The BFexclusion was 13.587 for the group effect and 61.096 for the group x block interaction, indicating that the data strongly favor excluding these terms, i.e., they provide strong evidence against a group effect.

Path length

Training resulted in decreases in path length, but no group differences. There was a significant main effect of block (F(1,56) = 53.63, P < .001), which indicated a decrease in path length from the first to the last block. The main effect of group (F(2,56) = 1.663, P = .199) and the block x group interaction (F(2,56) = 1.756, P = .182) were not significant. Similar to the movement time results, there was a decrease in path length during testing, i.e., cursor trajectories became straighter with practice, but there were no group differences (Fig 3). There was a significant main effect of block (F(1.042,58.348) = 68.062, P < 0.001), indicating that movement trajectories became significantly straighter over the course of testing. The main effect of group (F(2,56) = 0.416, P = 0.662) and the block x group interaction (F(2.084,58.348) = 0.183, P = 0.842) were not significant.
Fig 3

Normalized path length as a function of practice for the three groups.

Path length reduced significantly over the course of the experiment, indicating straighter paths. However, similar to the movement time results, there was no significant difference between groups during the test blocks.

To examine the strength of these null results, we also computed Bayes factors. The BFexclusion was 10.728 for the group effect and 42.575 for the group x block interaction, indicating that the data strongly favor excluding these terms, i.e., they provide strong evidence against a group effect.

Practice strategy in self-controlled groups

For the analysis of practice strategy, which involved only the self-selected and nudge groups, we did not have full target sequence data from one participant in the self-selected group; therefore, all analyses are reported for the remaining 39 participants (18 self-selected, 21 nudge). When we examined the probability of choosing the 'difficult' target, we found that overall, both self-controlled groups showed a lower than 25% probability of selection (Fig 4A), indicating that they tended to avoid the difficult targets (one-sample t-tests, P = .009 in blocks 1–4, P < .001 in blocks 5–8). There was a block x group interaction (F(1,37) = 7.010, P = .012). Analyses of the interaction showed that the Nudge group chose the 'difficult' target more often early in learning and then decreased this frequency with practice, whereas the Self-selected group did not show a significant change in the frequency of selecting the difficult target with practice. The main effects of block (F(1,37) = 0.371, P = .546) and group (F(1,37) = 0.008, P = .928) were not significant.
Fig 4

Practice strategies used by the self-selected, and nudge groups.

(A) Practice of the difficult target. The Nudge group, which had its difficult target increased in size, showed a higher probability of practicing the difficult target relative to the Self-selected group early on in practice (Blocks 1–4), but this difference disappeared later in practice. (B) Number of repetitions during practice. Both Self-selected and Nudge groups showed increased repetition early in practice. However, the Self-selected group showed an increased tendency for blocked practice (i.e., larger number of repetitions) early in practice, but this changed in the later blocks of practice. (C) Correlation between frequency of repetitions (computed over all 8 blocks of practice) and the movement time on the post-test. A positive correlation indicated that more repetition during training (i.e., a more blocked practice schedule) was associated with higher movement times on the post-test.

When we examined the structuring of practice (i.e., whether participants chose a more 'blocked' or 'random' schedule), we found that overall, both self-controlled groups showed more repetitions than the random group (which had 0% by definition) (Fig 4B). There was a main effect of block (F(1,37) = 7.212, P = .011), indicating that participants tended to block their practice more early on (blocks 1–4) than later in practice (blocks 5–8). The main effect of group (F(1,37) = 0.813, P = .373) and the block x group interaction (F(1,37) = 3.208, P = .081) were not significant. Finally, to examine whether practice strategy in terms of target repetitions affected performance, we correlated the number of repetitions across all 8 training blocks with movement time in the post-test (Fig 4C). We found a positive correlation (r = 0.483, P = .002, 95% CI: [0.198, 0.693]), indicating that more repetitions during practice (i.e., more blocked practice) were associated with longer movement times (i.e., worse task performance).

Discussion

The goal of the study was to address the role of self-controlled practice and nudging during structural learning of a novel task. Participants learned to control a novel interface that required motion of the upper body to move a screen cursor to different targets. Participants trained on a set of targets, and we examined structural learning during test phases that involved generalization to novel targets. We examined whether (i) a self-selected practice schedule resulted in better learning compared to a random practice schedule where participants did not have control, and (ii) nudging by adjusting task difficulty influenced learning relative to a self-selected strategy without nudging. For the first question, our results showed that although the self-controlled group exhibited shorter movement times early during training, there were no statistically significant differences between the random and self-controlled groups during the test conditions (which were our measure of structural learning). This was true both for the trained and novel targets, indicating that the groups did not differ in either retention or generalization. A possible explanation for these non-significant results in the post-test is a 'floor effect', i.e., that reductions in movement time had reached a limit by the end of training. However, we consider this explanation unlikely since the same pattern of results is seen even in the mid-test, when movement times were still decreasing. These results are somewhat inconsistent with the majority of experiments on self-controlled practice, which have demonstrated beneficial learning effects [13,37]. A critical difference from these prior studies is that the current study focused on structural learning–i.e., practicing variations so that the focus was not simply on improving performance in the trained tasks, but also on learning the underlying structure in order to generalize to other targets.
In contrast, prior studies on practice sequencing with self-controlled practice have typically employed different task variations, with no underlying rule or structure connecting these variations [22,23]. In the context of structural learning, self-controlled practice may create a potential tradeoff: participants may tend to focus excessively on improving performance on the training targets (as indicated by the increased repetition and avoidance of the difficult targets in the Strategy analyses); however, this focus on short-term performance may result in more 'blocked' practice, which could negate some of the other benefits of self-controlled practice. Supporting this claim, we found a positive correlation between the number of repetitions and the final movement time on the post-test, indicating that participants who self-selected a more 'blocked' practice schedule showed worse task performance in the post-test. These results suggest that even when participants are given the overall learning goal (in this case, participants were told that they would be evaluated on all targets at the end of training) [34], self-controlled practice schedules may not always be optimal in terms of practice structure, especially in the context of learning novel tasks. Approaches such as 'restricted' self-control, where participants face a mix of self-controlled and experimenter-imposed conditions, may provide the optimal learning environment in such cases [8]. For the second question, we used a Nudge group that was designed to follow a practice schedule similar to that of the self-selected group, but with the target sizes presented during the training blocks adjusted based on performance on the preceding testing block.
Specifically, by making the more difficult targets appear easier (and vice versa), we anticipated that we could 'nudge' participants into achieving a more even distribution of repetitions across all targets, hence addressing the issue of instance-based learning described above. Results showed that the nudge worked as a manipulation, i.e., the Nudge group did alter its strategy relative to the Self-selected group by choosing the difficult target more often early in learning. However, our results showed no reliable effect of this manipulation on any of the performance metrics relative to the Self-selected group, which was not nudged. One reason for this null result might be that we only evaluated target difficulty twice during the entire practice schedule: at the onset of practice and at the halfway mark (i.e., at the pre-test and mid-test). A more frequent update of task difficulty (e.g., once per training block) may have been more effective at ensuring that participants were practicing the target that was most difficult for them at that time. Also, we adjusted target sizes by a fixed amount based simply on the rank ordering of the Euclidean error (i.e., without considering the magnitude of the differences). Using a more sophisticated method, e.g., using Fitts' law [38] to control the index of difficulty, may provide a manipulation that is more uniform across participants. Given that the nudge altered participants' strategy, 'nudging' learners toward specific choices deserves greater attention in future motor learning studies, since control of the choice architecture lets the experimenter apply knowledge of optimal learning strategies to guide the learner toward better strategies while still retaining the learner's autonomy.
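As a concrete sketch of the Fitts'-law alternative suggested above, one could equate the index of difficulty (ID) across targets and solve for the width each target should have. The Shannon formulation of ID is used here; that choice, and the function names, are our assumptions:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def width_for_id(distance, target_id):
    """Target width that yields a desired index of difficulty:
    one hypothetical way to equate nominal difficulty across targets."""
    return distance / (2 ** target_id - 1)
```

Unlike a fixed 25% size change, adjusting widths to equalize ID would set the nominal difficulty of all targets to the same value in difficulty units, making the manipulation more uniform across participants.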
There are a few caveats that need to be addressed. First, we did not have a yoked group in this study, which would have received the same order of targets as that chosen by the self-controlled groups. The yoked group is considered the standard control group in several self-controlled practice studies because it allows the effect of 'autonomy' to be isolated; however, given that our research question was whether it is critical for the learner to have control over the practice sequence, the appropriate control group is the random group, which did not have such control. The utility of a yoked control arises only when the self-control group outperforms the random group, because the yoked group can then distinguish whether the benefit of self-control is due to the choice of a better practice sequence (in which case self-control should be similar to yoked) or to autonomy itself (in which case self-control should be better than yoked). In the current study, there was no evidence of the self-control group outperforming the random group. In addition, from a practical standpoint, the random group serves as a better control group because it would likely be the default practice schedule for learning this task. A second caveat is that our measures of learning were all obtained within the same day, from pre-test to post-test, similar to an 'immediate' retention test. Although an immediate retention test can be affected by 'temporary' effects indicative of a learning-performance distinction [39], these temporary effects usually differentially affect one group only when the manipulation has a drastic effect on performance (e.g., fatigue or guidance). In our case, the manipulation did not have any effects on performance even during learning, which makes it unlikely that temporary effects differentially affected one group.
In any case, inference from the current work is primarily about short-term, ‘within-session’ learning, and not about long-term retention or consolidation. A third caveat is that we did not have other measures of motivation or perceptions of competence [11,40], so we have restricted our discussion mostly to task performance. In summary, we found that although self-controlled practice schedules had distinct effects on practice strategy, they did not provide any additional performance benefits relative to a random, experimenter-determined practice schedule in a structural learning task. Understanding how to enhance structural learning of complex control interfaces may be a critical step in developing better practice schedules, both for novel human-computer interfaces and for current assistive devices.

17 Feb 2020

PONE-D-19-26775
Self-controlled practice and nudging during structural learning of a novel control interface
PLOS ONE

Dear Dr Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. As the handling academic editor, I thoroughly apologize for the delay in providing a response to your submission. Unfortunately, I was assigned the manuscript late in December and it proved exceedingly hard to find expert reviewers for this submission at this point in time. To avoid further delays, I am making my decision on the report of an expert in this area of investigation. The reviewer has several concerns that I agree with, but I would like to give you the opportunity to reply to my and the reviewer's comments and criticisms. I will ask for the reviewer's opinion on your re-submission, should you choose to submit one.
We would appreciate receiving your revised manuscript by Apr 02 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript: a rebuttal letter that responds to each point raised by the academic editor and reviewer(s), uploaded as a separate file and labeled 'Response to Reviewers'; a marked-up copy of your manuscript that highlights changes made to the original version, uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'; and an unmarked version of your revised paper without tracked changes, uploaded as a separate file and labeled 'Manuscript'.

Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,
Welber Marinovic
Academic Editor
PLOS ONE

Additional Editor Comments (if provided):

It was not clear to me how the difficulty of the targets was equated in the nudge group.
As I understand it, the targets were made bigger or smaller by 25%, and that choice was based on each participant’s performance at pre-test or mid-test. This number (25%) seems rather arbitrary: was the error on average 25% larger for the most difficult target? The discussion does mention this issue, but a stronger case could be built by collecting data for another study using the method described in the discussion. Some of the conclusions rely on the lack of statistical significance. I would recommend using either a parametric test of equivalence (e.g., TOSTER by Lakens) or providing Bayes Factors. Since you are using JASP, I would think it should be very easy to determine the strength of the null hypothesis.

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please ensure that you refer to Figure 4 in your text as, if accepted, production will need this reference to link the reader to the figure.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

3.
Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1:

# General comments

The experiment aimed to investigate whether a practice condition controlled by the learner promotes benefits for a specific case of learning (structural learning). An experimenter-imposed random practice schedule served as a control group for two self-controlled groups, one without and another with “nudge”. “Nudging” in the sensorimotor task used by the authors consisted of displaying larger targets in positions identified to be more difficult for the learners to “reach” (by controlling a virtual cursor).
The introduction presents the problem clearly and the Method contains all information necessary to understand the experimental task and procedures. The statistical analysis is adequate and presented in detail, as is the results section. Nevertheless, I have some concerns related to the discussion of the findings. The crucial aspect for me is that the “random practice” condition does not allow one to control for the effects of the learners’ autonomy (which is a limitation addressed by the authors in the discussion) and, most critically, this practice condition does not control for the effects of practice organization. Specifically, the practice organization “produced” by each learner as a result of their decision making during practice does not compare to the pseudo-random practice performed by the random group. This aspect is critical because it changes important interpretations presented in the discussion.

# Specific points:

“A feature of self-controlled learning that may be problematic here is that although it may benefit autonomy, it typically reduces the random structure of practice because participants tend to engage in more ‘blocked’ practice by repeating targets (24).”

“In the context of structural learning, self-controlled practice may create a potential tradeoff - participants may tend to focus excessively on improving performance on the training targets (as indicated by the increased repetition and avoidance of the difficult targets in the Strategy analyses), however, this focus on short-term performance may result in more ‘blocked’ practice, which could negate some of the other benefits of self-controlled practice.”

I recommend that the authors search the literature concerning this point, since there is evidence showing that this behavior actually depends on the instruction provided (i.e. participants only “tend to engage in more ‘blocked’ practice” in the absence of instructions beyond the ones describing the task goal).
----

“The goal of the study was to address the role of self-controlled practice in a structural learning task.”

I suggest rephrasing “structural learning task”, because the ambiguity may suggest that “structural learning” refers to an attribute of the task.

----

“One trivial possibility for these non-significant results is simply that any potential differences between groups was eliminated by a ‘floor effect’ in terms of the performance – i.e. movement times had reduced to a minimum possible limit by the end of training. However, we consider this unlikely as an explanation since the mid-tests (which were done in the middle of the training session) also showed the same patterns as the post-test.”

This argument is not clear to me. I would recommend rewriting it.

----

“Approaches such as ‘restricted’ self-control, where participants face a mix of self-controlled and experimenter-imposed conditions may provide the optimal learning environment in such cases (8)”

The task used in the cited study (8) was not similar to the one employed in the study under review. Considering the amount of studies investigating self-controlled sensorimotor learning not included in their paper - which, I thought, were not included because of the specificities of their task* - the criteria used by the authors to select which research to cite is sometimes unclear.

* “...these have been primarily used in rather well-learned tasks (such as key pressing or throwing) where the underlying schema may already be present through prior experience. In contrast, our focus in this study was to use a novel virtual task where the structure could only be learned through practice.”

----

“One reason for this null result might be that we only evaluated target difficulty twice during the entire practice schedule - at the onset of practice and at the halfway mark (i.e. at the pre-test and mid-test).
A more frequent update of task difficulty (e.g., once per training block) may have been more effective to ensure that participants were practicing on the most difficult target for them at that time.”

This would mean that any conclusions concerning “nudging” would be inappropriate, since the method did not allow the adequate manipulation of the independent variable. If I’m mistaken, maybe the argument needs some adjustments.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
4 Mar 2020

***We have uploaded a more readable response to the reviewers as a separate file.

Additional Editor Comments (if provided): It was not clear to me how the difficulty of the targets was equated in the nudge group. As I understand, the targets were made bigger or smaller by 25%, and that choice was based on participants’ performance at pre-test or mid-test. This number (25%) seems rather arbitrary: was the error on average 25% larger for the most difficult target? The discussion does mention this issue but a stronger case could be built by collecting data for another study using the method described in the discussion.

We thank the editor for this important comment. The choice of 25% was not based on the magnitude of the error; instead, it was mainly selected so that it would serve as a ‘nudge’ (i.e., a slightly larger target would increase the chance that it would be selected more often; however, if the target were made very large it would no longer be a nudge, as it essentially eliminates all other choices). Indeed, this manipulation was successful, as we found that the Nudge influenced target selection. Therefore, in spite of this 25% being a pre-defined choice, it had the desired effect on the experiment. The comment about running an experiment using the method described in the discussion is in our future plans: we are evaluating different methods of nudging to develop this as a further question. However, given that this is a separate research question in itself, we think that this distracts from the main focus of the current manuscript. So, we plan to do this work in the future but are not including it in the current manuscript.

Some of the conclusions rely on the lack of statistical significance. I would recommend using either a parametric test of equivalence (e.g., TOSTER by Lakens) or providing Bayes Factors.
Since you are using JASP, I would think it should be very easy to determine the strength of the null hypothesis.

We thank the editor for this point. We have now added Bayes factors to the central comparisons for the main dependent variable (i.e., the main effect of Group and the Group x Block interaction in the Movement Time analyses). As seen from the results, the Bayes Factors (>10 in all cases) strongly favor the null hypothesis (i.e., that the groups are not different), supporting the conclusions we had made.

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have now made all necessary formatting changes to conform to PLOS ONE’s style requirements as outlined in the links above.

2. Please ensure that you refer to Figure 4 in your text as, if accepted, production will need this reference to link the reader to the figure.

We have now referred to Figure 4 in the relevant sections of the Results.

Reviewers' comments:

Reviewer #1:

# General comments

The experiment aimed to investigate whether a practice condition controlled by the learner promotes benefits for a specific case of learning (structural learning). An experimenter-imposed random practice schedule served as a control group for two self-controlled groups, one without and another with “nudge”. “Nudging” in the sensorimotor task used by the authors consisted of displaying larger targets in positions identified to be more difficult for the learners to “reach” (by controlling a virtual cursor).
The introduction presents the problem clearly and the Method contains all information necessary to understand the experimental task and procedures. The statistical analysis is adequate and presented in detail, as is the results section.

We thank the reviewer for the positive comments.

Nevertheless, I have some concerns related to the discussion of the findings. The crucial aspect for me is that the “random practice” condition does not allow to control for the effects of the learners autonomy (which is a limitation addressed by the authors in the discussion) and, most critical, this practice condition does not control for the effects of practice organization. Specifically, the practice organization “produced” by each learner as a result of their decision making during practice does not compare to the pseudo random practice performed by the random group. This aspect is critical because it changes important interpretations presented in the discussion.

We agree with the reviewer that we did not control for autonomy using a yoked group, but this was not the central focus of the manuscript (we have noted this as a limitation and refrained from making statements about the effect of autonomy). However, the question of practice organization is the central focus here (i.e., is a self-selected sequence better for learning compared to a random sequence?). Therefore, we are not sure what the reviewer means by “the practice condition does not control for effects of practice organization”: we did not control for this precisely because our goal was to examine whether influencing the practice organization (random/self-selected/nudge) results in differences in learning.
# Specific points:

“A feature of self-controlled learning that may be problematic here is that although it may benefit autonomy, it typically reduces the random structure of practice because participants tend to engage in more ‘blocked’ practice by repeating targets (24).” “In the context of structural learning, self-controlled practice may create a potential tradeoff - participants may tend to focus excessively on improving performance on the training targets (as indicated by the increased repetition and avoidance of the difficult targets in the Strategy analyses), however, this focus on short-term performance may result in more ‘blocked’ practice, which could negate some of the other benefits of self-controlled practice.” I recommend that the authors search the literature concerning this point, since there is evidence showing that this behavior actually depends on the instruction provided (i.e. participants only “tend to engage in more ‘blocked’ practice” in the absence of instructions beyond the ones describing the task goal).

We thank the reviewer for raising this point. We did a further literature search and found that, in addition to one article we had cited, two more articles (Safir et al., 2013; Hodges et al., 2011) using the self-selected practice sequence reported exactly the same finding (i.e., that participants engage initially in blocked practice and later transition to a more ‘random’-like practice schedule). We have now added these references in the text with the caveat that the typical interpretation of this blocked practice is that it is beneficial to learning because it matches the challenge point. We were unable to find an article that links this to the presence or absence of instructions. If the reviewer feels this is critical, we would appreciate it if the reviewer could point us to this study.
----

“The goal of the study was to address the role of self-controlled practice in a structural learning task.” I suggest rephrasing “structural learning task”, because the ambiguity may suggest that “structural learning” refers to an attribute of the task.

We thank the reviewer for bringing our attention to this point. We have now re-phrased the sentence as follows: The goal of the study was to address the role of self-controlled practice and nudging during structural learning of a novel task.

----

“One trivial possibility for these non-significant results is simply that any potential differences between groups was eliminated by a ‘floor effect’ in terms of the performance – i.e. movement times had reduced to a minimum possible limit by the end of training. However, we consider this unlikely as an explanation since the mid-tests (which were done in the middle of the training session) also showed the same patterns as the post-test.” This argument is not clear to me. I would recommend rewriting it.

We apologize for a lack of clarity in this statement. Basically, we wanted to state that the argument for a ‘floor effect’ (i.e., that movement times had saturated to a lower limit) does not seem likely, as the results hold even during the mid-test (when movement times had not yet reached a floor). We have revised it as follows: A possible explanation for these non-significant results in the post-test is that each group experienced a “floor effect”, i.e., that the reduction in movement times had reached a limit by the end of training. However, we consider this explanation unlikely since the same pattern of results is seen even in the mid-test, when the movement times were still decreasing.
----

“Approaches such as ‘restricted’ self-control, where participants face a mix of self-controlled and experimenter-imposed conditions may provide the optimal learning environment in such cases (8)” The task used in the cited study (8) was not similar to the one employed in the study under review. Considering the amount of studies investigating self-controlled sensorimotor learning not included in their paper - which, I thought, were not included because of the specificities of their task* - the criteria used by the authors to select which research to cite is sometimes unclear. * “...these have been primarily used in rather well-learned tasks (such as key pressing or throwing) where the underlying schema may already be present through prior experience. In contrast, our focus in this study was to use a novel virtual task where the structure could only be learned through practice.”

We thank the reviewer for the comment. We want to clarify that our main criterion for selecting papers was not the task itself, but whether they manipulated the practice schedule (and not KR or other aspects, which are the predominant manipulations in the literature). However, we have also cited a few original and review articles on the overall effects of self-controlled learning in the introductory paragraph. The cited paper (Andrieux et al., 2016) best highlights one of the weaknesses of self-controlled practice, even though it was not in the context of a structural learning task. The authors in this paper state: “More specifically, learners engaged in a complete self-controlled (SC) condition tended to adopt a more comfortable, lower nominal level of task difficulty throughout the acquisition phase in order to increase the number of intercepted targets (p. 61).” As far as we can tell, this ‘downside’ to self-controlled learning is not mentioned in earlier articles.
----

“One reason for this null result might be that we only evaluated target difficulty twice during the entire practice schedule - at the onset of practice and at the halfway mark (i.e. at the pre-test and mid-test). A more frequent update of task difficulty (e.g., once per training block) may have been more effective to ensure that participants were practicing on the most difficult target for them at that time.” This would mean that any conclusions concerning “nudging” would be inappropriate, since the method did not allow the adequate manipulation of the independent variable. If I’m mistaken, maybe the argument needs some adjustments.

We thank the reviewer for the comment. We want to emphasize that the Nudge in our experiment did have the desired effect in terms of influencing target selection. However, because it did not have any effect on learning, we suggest that future study designs could use “real-time” nudging (by more frequently determining which is the more difficult target at each time point) and could adjust the size in a more proportional way (say, according to Fitts’ law). We have now rephrased this point to make sure that what we are suggesting are ‘improvements’ to the Nudge (and not that it failed in the current experiment).

Submitted filename: Response letter_v1.docx

17 Mar 2020

PONE-D-19-26775R1
Self-controlled practice and nudging during structural learning of a novel control interface
PLOS ONE

Dear Dr Lee,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, I believe the paper can be accepted, but there is one point that I would appreciate if the authors could check before I make a final decision. Therefore, I invite you to submit a revised version of the manuscript that addresses the point raised below.

Unfortunately, due to an illness, the reviewer who provided comments on the original submission could not assess your rebuttal letter.
Rather than seeking another reviewer, I took on the responsibility of assessing your answers myself. I think for the most part they are fine and I would have no further concerns. With regard to one point made by the Reviewer about instructions, I wondered whether she was referring to knowledge about the testing conditions. A few years back, I published a paper with colleagues showing that knowledge of how testing would occur after practice affects how learners organize practice, improving learning (DOI: 10.1016/j.humov.2012.11.008). I honestly don’t know if the reviewer had this type of work in mind or something else, but if you find it relevant to your discussion then you may want to consider that work. Of course, citing it or not has no impact on the paper’s acceptance, and the authors are free to ignore this suggestion if they believe it doesn’t tackle the reviewer’s concern or doesn’t improve the paper’s narrative. I will be ready to make a quick decision upon receiving your revised manuscript.

We would appreciate receiving your revised manuscript by May 01 2020 11:59PM.

Kind regards,
Welber Marinovic
Academic Editor
PLOS ONE

20 Mar 2020

We thank the Editor for this comment. In our experiment, we had given participants the “learning goal” (i.e.
that they would eventually have to learn to move to all 8 targets), so we think this makes the findings even stronger. We have now added this reference in the manuscript.

At the end of the Methods: Participants in the Self-selected and the Nudge groups were given the learning goal [34], i.e., they were instructed that while they had a choice of which target to select on trials during training, they would be evaluated on how well they could do all targets at the end of training.

In the Discussion: These results suggest that even when participants are given the overall learning goal (in this case, participants were told that they would be evaluated on all targets at the end of training) [34], self-controlled practice schedules may not always be optimal in terms of practice structure, especially in the context of learning novel tasks.

Submitted filename: Response letter_v2.docx

24 Mar 2020

Self-controlled practice and nudging during structural learning of a novel control interface
PONE-D-19-26775R2

Dear Dr. Lee,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements. Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication. Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information.
If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. With kind regards, Welber Marinovic Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: 26 Mar 2020 PONE-D-19-26775R2 Self-controlled practice and nudging during structural learning of a novel control interface Dear Dr. Lee: I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. For any other questions or concerns, please email plosone@plos.org. Thank you for submitting your work to PLOS ONE. With kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Welber Marinovic Academic Editor PLOS ONE
References: 35 in total (first 10 shown)

1.  Physical assistance devices in complex motor skill learning: benefits of a self-controlled practice schedule.

Authors:  G Wulf; T Toole
Journal:  Res Q Exerc Sport       Date:  1999-09       Impact factor: 2.500

2.  Challenge point: a framework for conceptualizing the effects of various practice conditions in motor learning.

Authors:  Mark A Guadagnoli; Timothy D Lee
Journal:  J Mot Behav       Date:  2004-06       Impact factor: 1.328

3.  Influence of practice schedule on testing schema theory predictions in adults.

Authors:  T D Lee; R A Magill; D J Weeks
Journal:  J Mot Behav       Date:  1985-09       Impact factor: 1.328

4.  Allowing learners to choose: self-controlled practice schedules for learning multiple movement patterns.

Authors:  Will F W Wu; Richard A Magill
Journal:  Res Q Exerc Sport       Date:  2011-09       Impact factor: 2.500

5.  Self-controlled feedback is effective if it is based on the learner's performance.

Authors:  Suzete Chiviacowsky; Gabriele Wulf
Journal:  Res Q Exerc Sport       Date:  2005-03       Impact factor: 2.500

6.  Learning from the experts: gaining insights into best practice during the acquisition of three novel motor skills.

Authors:  Nicola J Hedges; Christopher Edwards; Shaun Luttin; Alison Bowcock
Journal:  Res Q Exerc Sport       Date:  2011-06       Impact factor: 2.500

Review 7.  Self-regulated learning: beliefs, techniques, and illusions.

Authors:  Robert A Bjork; John Dunlosky; Nate Kornell
Journal:  Annu Rev Psychol       Date:  2012-09-27       Impact factor: 24.137

8.  Prior knowledge of final testing improves sensorimotor learning through self-scheduled practice.

Authors:  Flavio Henrique Bastos; Welber Marinovic; Aymar de Rugy; Go Tani
Journal:  Hum Mov Sci       Date:  2013-02-26       Impact factor: 2.161

9.  Subject-controlled performance feedback and learning of a closed motor skill.

Authors:  C M Janelle; J Kim; R N Singer
Journal:  Percept Mot Skills       Date:  1995-10

Review 10.  Structure learning in action.

Authors:  Daniel A Braun; Carsten Mehring; Daniel M Wolpert
Journal:  Behav Brain Res       Date:  2009-08-29       Impact factor: 3.332

