Literature DB >> 33657913

eConsult Specialist Quality of Response (eSQUARE): A novel tool to measure specialist correspondence via electronic consultation.

Christopher Tran1,2, Douglas Archibald3,4, Susan Humphrey-Murto1,5, Timothy J Wood5, Nancy Dudek1, Clare Liddy3,4, Erin Keely1,2.   

Abstract

High-quality correspondence between healthcare providers is critical for effective patient care. We developed an assessment tool to measure the quality of specialist correspondence to primary care providers (PCPs) via electronic consultation (eConsult), where specialists provide advice without specialist-patient interactions. We incorporated fourteen previously described features of high-quality eConsult correspondence into an assessment tool named the eConsult Specialist Quality of Response (eSQUARE). Six PCPs and two specialists applied the 10-item eSQUARE tool to 30 eConsults of varying quality as informed by PCP survey data. Content, response process, and internal structure validity evidence was gathered. Psychometric properties were calculated using descriptive statistics and generalizability analyses. Mean total score for low-quality eConsults (M = 24 ± 5.6) was significantly lower than moderate-quality eConsults (M = 38 ± 4.7; p<0.001) which was significantly lower than high-quality eConsults (M = 46 ± 3.0; p = 0.002). Reliability measures were high, including generalizability coefficient (0.96), inter-item (≥0.55) and item-total correlations (≥0.68). A decision study demonstrated that a single rater was adequate to achieve a reliability measure of ≥0.70. This study demonstrates initial validity evidence including multiple reliability measures for the eSQUARE. A single rater is adequate to achieve reliability measures for formative feedback. Future studies can apply the eSQUARE when planning educational initiatives aiming to improve specialist-to-PCP correspondence via eConsult.

Keywords:  Electronic consultation; assessment; documentation; medical education; telehealth

Year:  2021        PMID: 33657913      PMCID: PMC9066665          DOI: 10.1177/1357633X21998216

Source DB:  PubMed          Journal:  J Telemed Telecare        ISSN: 1357-633X            Impact factor:   6.344


Introduction

Effective communication between specialists and primary care providers (PCPs) is essential for coordinated patient care. While improved communication may lead to favourable patient outcomes, poor inter-provider correspondence may cause delays in patient care and compromise relationships between clinicians.[1-3] An increasingly implemented method for communication among clinicians is electronic consultation (eConsult): asynchronous communication through a secure electronic platform where PCPs can receive specialist advice without their patients meeting face-to-face with the specialist.[4-7] With more than 50 distinct services worldwide, eConsult is officially endorsed as a standard of practice by both primary care and specialist national regulatory bodies to improve PCP access to specialist advice.

As eConsult becomes more widespread, it is imperative to ensure effective communication when using it. Although both PCPs and specialists report better inter-provider communication with eConsult than with conventional letter correspondence,[10-12] PCPs can also have negative experiences with specialist communication via eConsult, particularly if advice is neither clear nor actionable. One way to improve the PCP experience would be to have users rate the quality of specialist correspondence using a formal assessment tool. Feedback generated from this tool could inform faculty development initiatives that aim to improve specialist-to-PCP communication via eConsult.

Existing tools assessing inter-provider correspondence focus primarily on traditional reply letters following face-to-face encounters and do not capture elements unique to eConsult.[15-19] In eConsult, there is no direct patient–specialist interaction, so the specialist relies entirely on the PCP to provide relevant details regarding history, physical examination and investigations. Also, the PCP is solely responsible for determining whether eConsult advice should be implemented, further emphasising the importance of ensuring high-quality specialist communication so that advice can be followed as intended.

This study is the second phase of a two-phase project aiming to develop and demonstrate initial validity evidence for a formal assessment tool that measures the quality of specialist communication via eConsult. The first phase used the nominal group technique (a consensus group methodology), in which an expert panel of 11 high-volume eConsult users generated an initial list of 50 items that were then refined to 14 key elements of high-quality eConsult correspondence.

Methods

Setting

We conducted our study from 2019 to 2020 using eConsults submitted through our regional service (Champlain BASE™, Ottawa, Canada). Established in 2010, our eConsult service has completed more than 65,000 eConsults across 135 specialty groups.

Tool development

Our team of five physicians (four specialists and one PCP) and two health professions education researchers held a meeting to review and refine the 14 key elements for effective eConsult specialist communication, as reported previously. Items where authors felt there was overlap in content were combined. The authors also discussed how best to format the assessment tool, for example the number of points on each scale and their descriptive anchors. To gather sources of validity evidence for our assessment tool, we applied modern validity theory as a framework where five types of validity are postulated: content, internal structure, response process, relation to other variables and consequences. Our study addresses the first three features using a similar approach to that of other assessment tools.[22-24]

Defining eConsult quality

We sought a range in the quality of the eConsults to see if raters could discriminate low- from high-quality eConsults when applying the assessment tool. eConsults submitted between October 2016 and December 2017 were stratified into three quality groups: low, medium and high. Quality was inferred from PCP responses to mandatory close-out surveys where they rate on a five-point scale how helpful and/or educational the specialist response was in guiding ongoing patient management. PCPs also have the option to provide free-text comments. We classified an eConsult as low quality if the PCP assigned a helpfulness rating of 1 or 2 while providing specific comments describing why the eConsult was not helpful. Medium quality eConsults were ones with helpfulness ratings of 3 or 4 and absent or non-specific free-text comments. We defined a high-quality eConsult as one where the PCP assigned the maximum score of 5 for helpfulness and provided positive, detailed comments explaining why the eConsult was particularly helpful. Example free-text comments depicting each quality category are shown in Table 1.
Table 1.

Selection of low-, medium- and high-quality eConsults for eSQUARE testing based on PCP survey data and free-text comments.

Low quality
  Survey Q3a: 1 or 2
  Survey Q5b: Negative comments specifically delineating why the eConsult was not helpful
  Example Q5 responses:
  'I would have appreciated a rationale for a particular course of action. Although I can follow directions, I won't be better informed for next time'.
  'Would appreciate more information about the length of course of both medications, how to administer topical medication, follow up and other options if not initial recommendations working'.
  'There was no relevant advice given in this consultation. No comments were made relating possible diagnosis, testing or treatment'.

Medium quality
  Survey Q3a: 3 or 4
  Survey Q5b: Absent or non-specific comments
  Example Q5 responses:
  'Again, very helpful!'
  'Thank you for this timely response. I will be implementing your advice'.
  'Many thanks for your helpful advice'.

High quality
  Survey Q3a: 5
  Survey Q5b: Positive comments detailing why the eConsult was helpful
  Example Q5 responses:
  'This is an excellent response, clear and with steps to follow and when to refer clearly indicated'.
  'Fantastic response. Very thorough with excellent, detailed next steps. Really appreciate the time you took to prepare the response. It's very helpful, and I can apply this to other patients as well'.
  'This was an excellent response, very helpful – gave me a specific plan, and when to refer to the specialist. Super helpful – thanks!'

aHow helpful and/or educational was this response in guiding your ongoing evaluation or management of the patient?

bWe would value any additional feedback you provide. [Comments for the specialist will be forwarded to her/him.]

eSQUARE: eConsult Specialist Quality of Response; PCP: primary care provider.

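The stratification rule above can be sketched as a small function. The inputs are hypothetical illustrations, not fields of the actual Champlain BASE™ data: `helpfulness` stands for the survey Q3 rating (1-5) and `comment_type` for a hand-coded summary of the Q5 free-text response.

```python
# Sketch of the quality-stratification rule described in the Methods.
# 'helpfulness' and 'comment_type' are hypothetical field names for
# illustration only.
def classify_econsult(helpfulness: int, comment_type: str):
    """Return 'low', 'medium' or 'high', or None if the eConsult does not
    meet the criteria for any category and would not be sampled."""
    if helpfulness in (1, 2) and comment_type == "specific_negative":
        return "low"
    if helpfulness in (3, 4) and comment_type in ("absent", "non_specific"):
        return "medium"
    if helpfulness == 5 and comment_type == "detailed_positive":
        return "high"
    return None

print(classify_econsult(5, "detailed_positive"))  # → high
```

eConsults falling between categories (e.g. a rating of 2 with no comments) simply fall outside the sampling frame, which matches the study design of deliberately selecting clear exemplars of each quality level.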

Sample size calculations

Although power analyses are conventional in quantitative research, one cannot be done for assessment tools before they are fully developed. Instead, sample size calculations during preliminary testing can be guided by the anticipated feasibility of using the tool and practical considerations such as finances and time. We thus opted for 10 eConsults for each quality category (30 in total) and aimed to recruit 10 raters to apply the assessment tool to each eConsult.

eConsult selection for testing

Among our cohort of 3324 eConsults, only 2% were assigned helpfulness ratings of 1 or 2; some specialties did not have any low-quality eConsults. No more than two eConsults were chosen from a given specialty, except the most popular specialty (dermatology) where three eConsults were included (one for each quality category). One author (C.T.) selected the 30 eConsults and presented them to the research group. We reached consensus that these were representative of the spectrum of quality across the most popular specialties. We removed all personal identifying information, including patient, PCP and specialist names.

Participants

We sent email invites to PCPs considered high-volume eConsult users, that is, those who submitted the above-median number of eight eConsults over a one-year period. Participants in the nominal group phase were excluded. We specifically recruited specialists in endocrinology and obstetrics/gynaecology, since they represent two of our four most frequently requested specialties; dermatology and haematology were the other top specialties, but they had already participated in the nominal group phase.

Rating process

The 30 de-identified eConsults, each containing the PCP’s clinical query and the specialist’s response, were provided to participants in a random order. We set up an online rating platform and provided PDF instructions on how to navigate the process, including descriptions of each item on the assessment tool to reduce any ambiguity (Appendix 1). We asked participants to do practice runs with the first few eConsults to familiarise themselves with the assessment tool. Once comfortable with the online platform, they then submitted their ratings. Participants were able to complete ratings in any order and in as many separate sessions as needed.

Statistical analysis

Descriptive statistics, including means for each item along with their standard deviations and ranges, were calculated by first averaging the ratings across raters. Total scores for each eConsult were calculated by summing the mean ratings received for each of the 10 items on the assessment tool. Any rating assigned as 'not applicable' (N/A) was replaced with the mean of the ratings that the same rater assigned to the remaining items. The internal structure of the tool was assessed using both inter-item correlations and item-total correlations; these were calculated using IBM SPSS Statistics for Windows v26 (IBM Corp., Armonk, NY).

A generalisability study was felt appropriate for determining reliability measures of our assessment tool, since it can determine how much variance in scores is due to differences between eConsults versus other variables (raters, scale items), along with their interactions. Repeated-measures analysis of variance (ANOVA) was conducted in which raters and eSQUARE items were treated as within-subject factors crossed with individual eConsults, using the G_String V and urGENOVA software platforms.[29-31] A decision study was conducted to explore how many raters would be needed to produce highly reliable ratings.

To determine whether the assessment tool could differentiate among low-, medium- and high-quality eConsults, total scores were analysed with a between-subjects ANOVA, with eConsult quality (low, medium and high) treated as a between-subjects factor. Parametric analyses such as ANOVA are robust and can maintain sufficient validity when applied to Likert-scale data, even when some assumptions are violated. We thus felt it was appropriate to apply parametric analyses to the sets of ordinal ratings across eSQUARE items.
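The scoring steps above (N/A imputation with the rater's own mean, averaging across raters per item, then summing the 10 items) can be sketched on a synthetic raters × items array; the data here are invented for illustration.

```python
import numpy as np

# ratings[rater, item]: one rater's scores on the 10 eSQUARE items for a
# single eConsult; np.nan marks a 'not applicable' (N/A) rating.
def total_score(ratings):
    filled = np.array(ratings, dtype=float)
    for row in filled:
        # Replace each rater's N/A entries with that rater's mean over the
        # remaining items, as described in the Methods.
        row[np.isnan(row)] = np.nanmean(row)
    item_means = filled.mean(axis=0)  # average across raters, per item
    return float(item_means.sum())    # total score = sum of the 10 item means

# Two synthetic raters; rater 1 marked item 3 as N/A.
example = [[4, 4, np.nan, 4, 4, 4, 4, 4, 4, 4],
           [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]]
print(total_score(example))  # 35.0: the N/A is imputed as 4, item means are 3.5
```

With 10 items rated 1-5, total scores range from 10 to 50, which is consistent with the group means of 24, 38 and 46 reported in the Results.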

Results

Items generated from the nominal group session where authors felt there was overlap in content were combined: for example, 'Educational: interaction is a learning experience' was combined with 'Separate advice for immediate action and additional material for future reference', and 'Openness to further communication, dialogue' was combined with 'Respectful, supportive tone'. 'Timeliness' was removed, since response time is captured electronically by the eConsult platform. The result was a set of 10 items. The authors then reached consensus to use five-point rating scales and a global rating scale to balance rater convenience with assessment tool sensitivity. Scale anchors ranged from 1 = 'not at all' to 5 = 'exemplary'. The assessment tool was named the eConsult Specialist Quality of Response (eSQUARE; Figure 1).
Figure 1.

The eConsult Specialist Quality of Response (eSQUARE) assessment tool.


Applying the eSQUARE to eConsults

Participants

Among the 295 PCPs who met the inclusion criteria, 11 agreed to participate; five family physicians and one nurse practitioner completed the study. The two invited specialists completed the study. Thus, a total of eight participants completed eSQUARE ratings for each of the 30 eConsults.

eConsults

When comparing eConsults grouped as low, medium and high quality based on PCP survey data, there was a statistically significant effect of quality (F(2, 27) = 57.7; p < 0.001, ηp² = 0.81), suggesting adequate variability among the selected eConsults.
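The between-subjects ANOVA behind this result can be illustrated on synthetic data; the three groups below are merely drawn around the total-score means and standard deviations this study reports (24 ± 5.6, 38 ± 4.7, 46 ± 3.0 with 10 eConsults each), and the F statistic is computed by hand.

```python
import numpy as np

# Illustrative one-way between-subjects ANOVA on synthetic total scores.
rng = np.random.default_rng(0)
groups = [rng.normal(m, s, 10) for m, s in [(24, 5.6), (38, 4.7), (46, 3.0)]]

k = len(groups)                              # number of quality groups (3)
n = sum(len(g) for g in groups)              # total eConsults (30)
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F(2, 27) = {f_stat:.1f}")
```

With group means this far apart relative to the within-group spread, the F statistic is large, mirroring the strong quality effect reported above.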

Descriptive statistics

Descriptive statistics for eSQUARE items rated on the five-point scale, along with item-total correlations and the frequency of N/A ratings, are shown in Table 2. While mean ratings for each item were moderately high, raters were also willing to use the full scale, as seen by the range in ratings.
Table 2.

Descriptive statistics for the eSQUARE assessment tool.

eSQUARE item                         Ratinga       Rangeb        Item-total    N/A
                                     M     SD     Min    Max    correlation   ratings
Current                              3.9   0.8    1.8    4.9    0.93          1
Educational, provides rationale      3.7   1.1    1.1    5.0    0.96          0
Patient specific                     4.0   0.9    1.6    5.0    0.90          1
Addresses each question              3.7   1.1    1.4    5.0    0.91          2
Specific recommendations             3.7   1.1    1.8    5.0    0.95          1
Anticipatory guidance                3.0   1.2    1.0    5.0    0.93          7
When face-to-face referral needed    2.8   1.4    1.0    5.0    0.68          19
Doable action items                  3.8   1.1    1.3    5.0    0.97          1
Clear, organised                     3.8   1.0    2.0    5.0    0.96          0
Professional, supportive             3.7   1.2    1.0    5.0    0.93          0
Global rating                        3.5   1.2    1.0    4.9    –             –

aEach item was rated on a five-point scale, ranging from 1=‘not at all’ to 5=‘exemplary’.

bRange of actual ratings for each item. Non-integer values occurred when N/A ratings were replaced with mean ratings each specific rater assigned to remaining items.


Internal structure measures

Item total score correlations were high, with all but one item having values >0.90. Inter-item correlations ranged from 0.44 to 0.94, with several items being highly correlated with one another (e.g. items 1–5, item 8 and item 10), indicating potential redundancy of items (Table 3). Since total scores and global rating scores were highly correlated (0.98), subsequent analyses focused only on scale items.
Table 3.

Item correlations for the eSQUARE assessment tool.

Item  Description                          1     2     3     4     5     6     7     8     9     10
1     Current                              1.00  0.98  0.89  0.89  0.90  0.84  0.55  0.93  0.92  0.92
2     Educational, provides rationale      0.98  1.00  0.91  0.92  0.93  0.86  0.61  0.94  0.94  0.94
3     Patient specific                     0.89  0.91  1.00  0.83  0.84  0.85  0.64  0.88  0.86  0.87
4     Addresses each question              0.89  0.92  0.83  1.00  0.90  0.84  0.61  0.91  0.91  0.85
5     Specific recommendations             0.90  0.93  0.84  0.90  1.00  0.88  0.63  0.96  0.97  0.91
6     Anticipatory guidance                0.84  0.86  0.85  0.84  0.88  1.00  0.83  0.91  0.88  0.86
7     When face-to-face referral needed    0.55  0.61  0.64  0.61  0.63  0.83  1.00  0.67  0.63  0.61
8     Doable action items                  0.93  0.94  0.88  0.91  0.96  0.91  0.67  1.00  0.95  0.92
9     Clear, organised                     0.92  0.94  0.86  0.91  0.97  0.88  0.63  0.95  1.00  0.92
10    Professional, supportive             0.92  0.94  0.87  0.85  0.91  0.86  0.61  0.92  0.92  1.00
Variance components associated with administering the eSQUARE to eConsults are shown in Table 4. Differences among individual eConsults accounted for most of the variance in the scores (48%), suggesting that ratings varied from one eConsult to another. The next largest variance component other than overall error was the interaction between eConsults and raters (15%), indicating that raters varied somewhat in the ratings they assigned to each eConsult. Individual raters accounted for only 1% of the overall variance, suggesting minimal differences across raters. Variance components associated with items tended to be low; that is, items were not a major source of variance. The G-coefficient for the scale with eight raters was 0.96, suggesting that scores from one rater can be generalised to another with high consistency.
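For a fully crossed eConsult × rater × item design, the relative G-coefficient can be recomputed from the Table 4 variance components: only facets that interact with eConsults contribute to relative error. Because the published components are rounded to two decimals, this sketch reproduces the reported value of 0.96 only approximately.

```python
# Relative G-coefficient for the crossed eConsult x rater x item design,
# using the rounded variance components from Table 4.
def g_coefficient(var_e, var_er, var_ei, var_eri, n_raters, n_items):
    rel_error = (var_er / n_raters
                 + var_ei / n_items
                 + var_eri / (n_raters * n_items))
    return var_e / (var_e + rel_error)

g = g_coefficient(var_e=0.96, var_er=0.31, var_ei=0.17, var_eri=0.38,
                  n_raters=8, n_items=10)
print(round(g, 2))  # ~0.94 from the rounded components (0.96 reported from exact ones)
```

Note that the rater (r), item (i) and rater × item (ri) components do not appear in the relative error term, since they shift all eConsults equally and so do not affect their rank order.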
Table 4.

eSQUARE variance components.

Variable  Variance component  % Variance  Description
e         0.96                48%         Variance due to differences among eConsults
r         0.03                1%          Variance due to differences among raters
i         0.12                6%          Variance due to differences among eSQUARE items
er        0.31                15%         Variance due to rater inconsistency across eConsults
ei        0.17                9%          Variance due to eConsult inconsistency across items
ri        0.04                2%          Variance due to rater inconsistency across eSQUARE items
eri       0.38                19%         Overall error and variance due to the interaction between eConsults, raters and items

e: eConsult; r: raters; i: scale-rated items.

To estimate the minimal number of raters needed to optimise score consistency and calculate inter-rater reliability, a decision study was performed (Appendix 2). A single rater applying all 10 eSQUARE items to assess an eConsult response would produce a reliability of 0.74; values >0.70 are considered adequate for formative feedback purposes.
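The decision-study projection can be sketched with the same relative-error formula, varying the number of raters while keeping all 10 items. Values drift slightly from Appendix 2 (e.g. ~0.72 versus the reported 0.74 for a single rater) because the published Table 4 variance components are rounded.

```python
# D-study sketch: project the relative G-coefficient for 1-8 raters with all
# 10 eSQUARE items, from the rounded Table 4 variance components.
var_e, var_er, var_ei, var_eri = 0.96, 0.31, 0.17, 0.38
n_items = 10

projected = {}
for n_raters in range(1, 9):
    rel_error = (var_er / n_raters
                 + var_ei / n_items
                 + var_eri / (n_raters * n_items))
    projected[n_raters] = var_e / (var_e + rel_error)

for n_raters, g in projected.items():
    print(n_raters, round(g, 2))
```

As in Appendix 2, the projected reliability rises steeply from one to two raters and then flattens, which is why a single rater suffices for formative purposes while little is gained beyond three or four raters.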

Response process evidence

An aspect of response process evidence is that raters are consistent in how they apply their ratings. Post hoc t-tests revealed that the mean±standard deviation total score for low-quality eConsults (24±5.6) was significantly lower than moderate-quality eConsults (38±4.7; p<0.001) which in turn was significantly lower than high-quality eConsults (46±3.0; p=0.002), suggesting that raters using the eSQUARE could reliably distinguish between eConsults divided into three levels of quality.

Discussion

To ensure high-quality communication through emerging health-care innovations such as eConsult, tools to provide guidance, feedback and formal assessment of specialist-to-PCP communication are needed. Using items generated through rigorous consensus methodology and applying modern validity theory as a framework, our research team developed and tested the eSQUARE, a novel 10-item assessment tool, to assess the quality of eConsult replies. Our study provides sources of validity evidence for the eSQUARE, including content, response process and internal structure evidence. Having a group of medical education researchers with expertise in assessment tool development and psychometric analyses, plus a rigorous method of selecting items to ensure that eSQUARE elements represent the quality of eConsult specialist written communication, demonstrates content evidence. We infer that rater training was adequate, since participants did not ask for any clarification nor were any revisions required for the instructional PDF document, thus generating response process validity evidence. The eSQUARE was able to discriminate between eConsults rated as low, medium or high quality as defined in Table 1; having raters consistently rate eConsults in a similar manner builds further response process evidence. Our analyses found high reliability measures, including a G-coefficient of 0.96, indicating the eSQUARE could produce consistent scores and thus demonstrating internal structure evidence.

The eSQUARE performed well when compared to other written communication instruments. While we ultimately recruited only 8 of our target of 10 raters, our D-study demonstrated that a single rater assessing one eConsult was adequate to achieve a reliability measure of ≥0.70 with our sample of eConsults, a described benchmark for reliable formative assessment. This compares favourably to the Sheffield Assessment Instrument for Letters (SAIL), a tool assessing the quality of clinic letters written by specialist registrars, where three raters applying the SAIL to eight letters for a given registrar are required to achieve a reliability measure of 0.70. This has implications for the future, since requiring multiple raters and eConsults per specialist could impair the feasibility of widespread implementation of the eSQUARE. Our figures also align with literature describing convenience samples of clinical performance, where 7–11 observer ratings are needed to produce adequate generalisability data, for example a G-coefficient ≥0.80 as seen in our study.

The eSQUARE can potentially be used to guide new specialists on eConsult services, as well as to provide feedback and quality control. Its ease of use and high reliability, which limits the number of raters required, make it well suited to wide applicability. Our raters did not ask for any further clarification on how to use the eSQUARE, nor did they seek training beyond the single instructional document provided to them. This compares favourably to rater training described for other medical documentation assessment tools, which involved one- to two-hour training sessions with the primary investigator. We thus do not feel that formal training is required to use the eSQUARE, which should facilitate its widespread implementation.

Our raters used the entire range of the five-point scales for each item (Table 2). Thus, there was no evidence of end-aversion or central-tendency biases, where raters tend to avoid low scores or favour middle options. Our analysis included item-total correlations, inter-item correlations and a generalisability study, which demonstrated very high reliability measures. As these may indicate item redundancy, one could argue that the eSQUARE could be condensed further. For example, one could combine the items 'Anticipatory guidance' and 'When face-to-face referral needed', since both had the highest number of N/A ratings, the latter item had the lowest reliability measures, and their inter-item correlation with each other was their highest with any other item. These two items could be combined as: 'Anticipatory guidance including red flags (e.g. key features that would prompt further work-up), what to try next if recommendations do not result in a favourable outcome, and when a face-to-face referral would be indicated'. However, since we intend the eSQUARE to be a formative assessment tool, we opted not to combine items further, despite high reliability measures. The remaining items represent those generated by formal consensus methods. Thus, each item may provide its own value when providing feedback to specialists while still achieving reasonable psychometric rigour for the eSQUARE.

Study limitations

This was a single-centre Canadian study that only recruited users of the Champlain BASE™ eConsult programme and may not be generalisable to other eConsult platforms and providers. For example, eConsult specialists who share common electronic records with the PCP – a rare occurrence within our service – have resources to find information not originally presented by the PCP. Additional limitations may be exposed upon implementation of eSQUARE on a broader scale. Since we tested the eSQUARE using handpicked eConsults on a limited number of specialties, reliability measures for a random eConsult sample may be less robust. Also, since raters may have volunteered to participate in the study due to personal interest in eConsult assessment, ‘real-world’ raters who are not as engaged as study participants may produce lower levels of reliability when using the eSQUARE. While our survey data indicate that most eConsults are highly rated by PCPs, this may be driven by elements not captured by eSQUARE such as timeliness of response and reluctance to rate colleagues harshly.

Future directions

By using items that describe features of high-quality eConsults, the eSQUARE can inform educational interventions aiming to improve specialist-to-PCP communication via eConsult. One approach is audit and feedback, where eSQUARE scores can be combined with other outcome data (e.g. PCP survey data, referral outcomes for patients) to measure how a single specialist or group of consulting physicians within a specialty are performing with respect to their peers. This information could help identify underperforming specialists for targeted education and could add to performance reports for the purpose of satisfying accreditation bodies. The eSQUARE tool can be included in eConsult educational activities for faculty or resident physicians (e.g. accredited workshops, seminars, short courses). eSQUARE scores can be used as outcome measures when assessing the impact of these educational interventions, similar to how other written communication assessment tools have demonstrated improvements in rating scores and favourable changes in letter-writing behaviours.

Conclusions

Using modern validity theory as a framework, we have demonstrated initial validity evidence, including multiple reliability measures, for the novel eSQUARE tool. Since a single rater without extensive training can achieve adequate reliability measures for the purpose of formative feedback, the eSQUARE is a feasible tool for the assessment of the quality of specialist-to-PCP communication via eConsult. Our next steps include applying the eSQUARE to a larger sample of eConsults to build further validity evidence and incorporating the eSQUARE tool into educational interventions that aim to improve eConsult correspondence, both within our regional system and abroad.

Appendix 2. Decision study: projected reliability by number of raters and number of eSQUARE scale items.

                         Number of raters
Number of scale items    1     2     3     4     5     6     7     8
 1                       0.62  0.77  0.83  0.87  0.89  0.91  0.92  0.93
 2                       0.68  0.81  0.86  0.89  0.91  0.93  0.94  0.94
 3                       0.70  0.82  0.88  0.90  0.92  0.93  0.94  0.95
 4                       0.71  0.83  0.88  0.91  0.93  0.94  0.95  0.95
 5                       0.72  0.84  0.89  0.91  0.93  0.94  0.95  0.95
 6                       0.73  0.84  0.89  0.91  0.93  0.94  0.95  0.96
 7                       0.73  0.84  0.89  0.92  0.93  0.94  0.95  0.96
 8                       0.73  0.85  0.89  0.92  0.93  0.94  0.95  0.96
 9                       0.74  0.85  0.89  0.92  0.93  0.94  0.95  0.96
10                       0.74  0.85  0.89  0.92  0.93  0.94  0.95  0.96

References

1.  Considerations in determining sample size for pilot studies.

Authors:  Melody A Hertzog
Journal:  Res Nurs Health       Date:  2008-04       Impact factor: 2.228

2.  A Systematic Review of Asynchronous, Provider-to-Provider, Electronic Consultation Services to Improve Access to Specialty Care Available Worldwide.

Authors:  Clare Liddy; Isabella Moroz; Ariana Mihan; Nikhat Nawar; Erin Keely
Journal:  Telemed J E Health       Date:  2018-06-21       Impact factor: 3.536

3.  Promoting Responsible Electronic Documentation: Validity Evidence for a Checklist to Assess Progress Notes in the Electronic Health Record.

Authors:  Jennifer A Bierman; Kathryn Kinner Hufmeyer; David T Liss; A Charlotta Weaver; Heather L Heiman
Journal:  Teach Learn Med       Date:  2017-05-12       Impact factor: 2.414

4.  Referral and consultation communication between primary care and specialist physicians: finding common ground.

Authors:  Ann S O'Malley; James D Reschovsky
Journal:  Arch Intern Med       Date:  2011-01-10

5.  Development and Establishment of Initial Validity Evidence for a Novel Tool for Assessing Trainee Admission Notes.

Authors:  Danielle E Weber; Justin D Held; Roman A Jandarov; Matthew Kelleher; Ben Kinnear; Dana Sall; Jennifer K O'Toole
Journal:  J Gen Intern Med       Date:  2020-01-28       Impact factor: 5.128

6.  Unique Educational Opportunities for PCPs and Specialists Arising From Electronic Consultation Services.

Authors:  Erin J Keely; Douglas Archibald; Delphine S Tuot; Heather Lochnan; Clare Liddy
Journal:  Acad Med       Date:  2017-01       Impact factor: 6.893

7.  Impact of and Satisfaction with a New eConsult Service: A Mixed Methods Study of Primary Care Providers.

Authors:  Clare Liddy; Amir Afkham; Paul Drosinis; Justin Joschko; Erin Keely
Journal:  J Am Board Fam Med       Date:  2015 May-Jun       Impact factor: 2.657

8.  Enhancing quality of trainee-written consultation notes.

Authors:  Delphine S Tuot; Niraj L Sehgal; Naama Neeman; Andrew Auerbach
Journal:  Am J Med       Date:  2012-05-04       Impact factor: 4.965

9.  Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers.

Authors:  Todd D Reeves; Gili Marbach-Ad
Journal:  CBE Life Sci Educ       Date:  2016       Impact factor: 3.325

10.  A systematic review of the use of theory in randomized controlled trials of audit and feedback.

Authors:  Heather L Colquhoun; Jamie C Brehaut; Anne Sales; Noah Ivers; Jeremy Grimshaw; Susan Michie; Kelly Carroll; Mathieu Chalifoux; Kevin W Eva
Journal:  Implement Sci       Date:  2013-06-10       Impact factor: 7.327
