Rochelle E Tractenberg, Jessica M Lindvall, Teresa K Attwood, Allegra Via.
Abstract
As the life sciences have become more data intensive, the pressure to incorporate the requisite training into life-science education and training programs has increased. To facilitate curriculum development, various sets of (bio)informatics competencies have been articulated; however, these have proved difficult to implement in practice. Addressing this issue, we have created a curriculum-design and -evaluation tool to support the development of specific Knowledge, Skills and Abilities (KSAs) that reflect the scientific method and promote both bioinformatics practice and the achievement of competencies. Twelve KSAs were extracted via formal analysis, and stages along a developmental trajectory, from uninitiated student to independent practitioner, were identified. Demonstration of each KSA by a performer at each stage was initially described (Performance Level Descriptors, PLDs), evaluated, and revised at an international workshop. This work was subsequently extended and further refined to yield the Mastery Rubric for Bioinformatics (MR-Bi). The MR-Bi was validated by demonstrating alignment between the KSAs and competencies, and its consistency with principles of adult learning. The MR-Bi tool provides a formal framework to support curriculum building, training, and self-directed learning. It prioritizes the development of independence and scientific reasoning, and is structured to allow individuals (regardless of career stage, disciplinary background, or skill level) to locate themselves within the framework. The KSAs and their PLDs promote scientific problem formulation and problem solving, lending the MR-Bi durability and flexibility. With its explicit developmental trajectory, the tool can be used by developing or practicing scientists to direct their (and their team's) acquisition of new, or to deepen existing, bioinformatics KSAs. 
The MR-Bi is a tool that can contribute to the cultivation of a next generation of bioinformaticians who are able to design reproducible and rigorous research, and to critically analyze results from their own, and others', work.
Year: 2019 PMID: 31770418 PMCID: PMC6879125 DOI: 10.1371/journal.pone.0225256
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Mastery Rubric for Bioinformatics (MR-Bi).
| Performance Level: | Novice | Beginner | Apprentice | J1 Journeyman | J2 Journeyman |
|---|---|---|---|---|---|
| Stage description | Reads, generally understands, but does not question, life science research (results). Beginning to recognize that “facts” are actually just the best-currently-supported theory. Limited engagement with uncertainty associated with “facts”; developing understanding of experimental design paradigms in biology, & own specific area of study. | Consolidates reading & understanding, beginning to learn how to analyze given biology problems (with software). Growing recognition that “facts” are typically the best-currently-supported theory. Engaging consistently with uncertainty associated with “facts”; deepening understanding of experimental design paradigms in biology, & own specific area of study. | Reads & understands; reliably identifies methods (software & programming) for given problems. Chooses & executes correct analysis, not necessarily able to identify several methods that could be equally viable, depending on given research objectives. Qualified as a fluent, but not as an independent, scientist who uses bioinformatics as a tool, but does not yet synthesize technology with biology to generate new research problems. | Qualified as an independent scientist who uses bioinformatics methodologies as part of routine practice. Poses novel scientific questions, & identifies data & technology to align appropriate statistical/analytical methods to desired scientific objectives. Experienced reviewer of relevant technical features of available bioinformatics methods. Newly-independent expert in integrating bioinformatics technology/techniques into novel research problems in their area of expertise. | Independent scientist who expertly integrates bioinformatics & more traditional methodologies, as needed, to achieve desired objectives & contribute to the body of knowledge. Expert reviewer of relevant technical features of available bioinformatics options. |
| Ethical practice | Exhibits respect for community standards/rules for public behavior & personal interaction. Learning how to recognize, & manifest respect for, intellectual property, professional accountability, & scientific contributions. | Learning to recognize “misconduct” in the scientific sense. Learning to avoid, & respond to, misconduct; & the importance of neither condoning nor promoting it. | Learning the principles of ethical professional & scientific conduct. Seeks guidance to strengthen applications of these principles in own practice. Learning how to respond to unethical practice. | Practices bioinformatics in an ethical way, & does not promote or tolerate any type of professional or scientific misconduct. Seeks guidance in how/when to take appropriate action when aware of unethical practices by others. | Practices, & encourages all others to practice, bioinformatics in an ethical way. Does not promote or tolerate any type of professional or scientific misconduct. Takes appropriate action when aware of unethical practices by others. |
| Prerequisite knowledge: biology | Basic knowledge of biology; little-to-no awareness of the uncertainty inherent in experimental designs common in the life sciences. Thinking about the life sciences is based on uncritical acceptance of information as “factual” or “true”. | Advanced knowledge of biology, & basic knowledge of key bioinformatics methods. Very simple statistics/programs are run to answer pre-defined scientific questions. Learning to understand the uncertainty inherent in the scientific method, questions assumptions in the data & their relevance for given scientific problems (which arise from others). | Thinking about life sciences integrates both experimental & bioinformatics/technological sources for data & knowledge. Understands the uncertainty inherent in the scientific method, questions assumptions in the data & their relevance for given scientific problems (which typically arise from others, or with others). Experimental design & statistical inference are recognized & exploited with guidance, to answer given scientific problems. Can recognize inconsistencies in biological data/experiments that are identified by others, but cannot troubleshoot experimental methods independently. | Recognizes the importance of, & is able to critically evaluate, the relevant literature, & understands historical background of the relevant biological system(s). Sufficient knowledge of a biological system(s) to be able to draw functional conclusions from analytical results. Collaborates with experts to inform the next stages in the experimental design process (validating results, follow-up analyses, etc.). | Makes predictions to inform next stages of experimental design process. Evaluates relevant experimental methods that can be applied in any problem. Can generalize to other biological systems; independently solves biological problems that are innovative & move the field forward. |
| Prerequisite knowledge: computation | Basic knowledge of computational methods; little-to-no awareness of the relevance of computational methods for life sciences. No awareness of experimental designs or how these can be used or implemented in computational applications. Thinking about tools, computers, software, & programming is strictly uni-dimensional: i.e., extrapolation &/or abstraction of knowledge about computational methods to other systems, programs, or problems, are not possible. Can run software or execute code they are given (as appropriate) with precise instructions; cannot write a script or debug/troubleshoot. | Computers, software, tools, & programming are understood to be options for scientific work. Learning how to write & test code, run software, or use tools, as appropriate. Is developing awareness of the variety of bioinformatics tools, designs, & resources, but is not able to choose or apply the most appropriate of these for any given question; when choices are made, tools are used uncritically. Developing awareness that computational tools require input parameters, but uses the default settings. Learning to read, understand, troubleshoot, & make minor modifications to existing code/scripts. Does not synthesize results or outputs. | Learning to test software & programming approaches to different types of problem. Experimental design & statistical inference using computing & algorithms are recognized & applied, with guidance, to answer given scientific problems. Learning “best practices” for programming, if programming is part of the task. Can write basic code in a given language or run appropriate software, using judgement, but not inventing or innovating. Cannot troubleshoot complex computational methods; will ask for guidance. Exploring alternatives to default input parameters across computational tools. Can apply knowledge of tools to interpret their results & output. Seeks guidance in synthesis of results or outputs. | Recognizes the importance of, is able to critically evaluate, & understands historical background of the relevant data, databases, algorithms, tools, data analysis/statistical methods & computational resources. Can utilize these & justify trade-offs across methodologies (e.g., which statistical test to apply & what computational methods to use). Collaboratively synthesizes & critically questions analysis results & output from tools. Recognizes the iterative nature of experiments (e.g., bench, data analysis, back to bench). Can write code/use tools to accomplish these, but collaborates with domain experts for identifying & articulating biological problems that are innovative & move the field forward. | Develops robust, well-documented, optimized, reproducible code &/or uses tools to address biological problems; moves away from standard procedures & innovates to accommodate new data types, tools, & techniques as needed. Can generalize to new coding languages or software/tools/resources. |
| Interdisciplinary integration | Does not recognize life sciences as requiring integration of both experimental & computational/modeling approaches. Perceives disciplines as separate; integration only occurs when/as directed. Information, ideas & tools that are inter-disciplinary are used without question. | Beginning to think about life sciences as requiring integration of both experimental & computational/modeling approaches. Recognizes that interdisciplinarity is needed, but does not know how (or when) to do it, & requires direction. Learning the integrating process; learning strengths & weaknesses of biological & computational methods, but not sufficient to question assumptions from these & other disciplines. | Understands that life sciences integrate both experimental & computational/modeling approaches; seeks guidance about how & when to integrate. Developing an understanding of the strengths & weaknesses of biological & computational methods, beginning to question fundamental assumptions from these & other disciplines for any given scientific problem (which typically arises from others, or in conjunction with others). | Collaboratively integrates across relevant disciplines to address, & solve, innovative biological problems. Tests multiple avenues to triangulate solutions, with minimal guidance. Recognizes the roles of interdisciplinary teams in the research process, & the importance of integrating interdisciplinarity early on. Works effectively on interdisciplinary teams with minimal guidance. | Formulates innovative biological problems that require interdisciplinary solutions. Integrates methods & results to derive & contextualize solutions to biological problems. Consistently tests multiple avenues to triangulate solutions, while exploiting relevant findings from other disciplines. Actively builds interdisciplinary teams, as needed. |
| Define a problem | Can recognize a problem that is explicitly articulated or concretely given, but cannot derive one. Unaware of the depth & breadth of “the knowledge base” that is or could be relevant for the formulation of a problem. Does not recognize design features or other evidence as the basis of/support for problem articulation. Does not recognize uncertainty or how this affects the formulation of solvable problems. | Developing awareness of the depth & breadth of “the knowledge base” that is or could be relevant for the formulation of a problem. Cannot differentiate gaps in own knowledge from gaps in “the knowledge base”. Developing the ability to recognize that uncertainty may have arisen in the formulation of solutions to problems. | Beginning to use, with guidance, the appropriate knowledge base to address a given problem. Recognizes the need to consider a wider scope of knowledge for alternative solutions to a problem common across contexts or domains. In guided critical reviews, learning to recognize that design features & evidence base are important to drawing conclusions. Recognizes the role of uncertainty in research, & that reproducibility & potential bias should be considered for every result. | Can explore & critically review the relevant knowledge base, & collaboratively articulate a problem based on that review. Reviews include assessment of relevance from (potentially) ancillary domains, bias, reproducibility, & rigor; recognizes when appropriate & inappropriate methodology is used. Recognizes when incomplete review is provided (by themselves or by others). Can discern reproducible from non-reproducible results; can identify major sources of bias throughout the knowledge base. | Independently defines & articulates theoretical or methodological problems based on a critical review of the relevant knowledge base(s). Knows the hallmarks of questionable research hypotheses & misalignment of testing/statistics with poorly articulated research problems; consistently finds & identifies sources of bias. Articulates when appropriate & inappropriate methodology is used/reported. Critical review & problem articulation integrate diverse disciplinary perspectives when appropriate/adaptable. |
| Hypothesis generation | When directed, follows instructions to | When directed, uses the default settings of software, tools, or the GUI to test hypotheses in pre-planned analyses; does not generate testable hypotheses. Does not recognize that hypotheses may be generated & tested within the intermediate steps of an analysis. Developing the understanding that all methods involve assumptions. | With guidance, can: 1. leverage tools, software, data & other technologies (GUI/programming) to test hypotheses; & 2. generate hypotheses based on either the data or the technology, but not their combination/synthesis. Hypothesis generation possible in highly concrete & fully parameterized problems; developing the ability to identify whether a given hypothesis (including one of their own) is testable. Learning to recognize that experimental design & design of software/programming solutions include hypothesis generation to some extent. Developing the abilities to identify, & plan to address, assumptions that different hypotheses necessitate. | Collaboratively integrates hypothesis generation into the consideration of literature, data & analysis options. Seeks appropriate guidance in the synthesis of data & technology to generate novel, testable hypotheses. Considers the process of hypothesis generation & testing to be iterative when this is appropriate. Hypothesis generation is done with consideration of reproducibility & potential for bias, & takes into account the most clearly relevant literature; recognizes that less-obviously relevant literature may also be informative for hypothesis generation. | Independently generates testable hypotheses that are scientifically innovative as well as feasible (possible for economic reasons, time, impact, etc.). In own & others’ work, recognizes that, & articulates how, hypothesis generation from planned & unplanned analyses differ in their evidentiary weight & their need for independent replication. Fully explores all relevant knowledge base(s) to support rigor & reproducibility, & to avoid bias, in the generation of hypotheses. |
| Experimental design | Can recognize concrete features of experiments only if they are described/given, and they match basic design elements (e.g., dependent, independent variables). Cannot design data collection or experiments. Unaware of covariates or their importance in analysis or interpretation. Does not recognize the importance of design, data collection, data quality, storage/access, analysis, & interpretation to promote rigor & reproducibility in experimental design. | Can identify salient features of experiments that are described/given if they match previously encountered design elements, but cannot derive them if they are not present. Recognizes covariates if mentioned, but does not require formal consideration (or justification) or evaluation of covariates. Does not recognize that one experiment alone cannot adequately address meaningful biological research problems. Understands that experimental design involves the identification, gathering, storing, analyzing, interpreting, & integrating of data & results. | Can match the correct data collection design to the instruments & outcomes of interest. May include/exclude covariates, or other design features, “because that is what is done”, without being able to justify their roles in the hypotheses to be tested. Developing the understanding that weak experimental design yields weak data & weak results. Needs assistance in conceptualizing covariates & their potential roles in the planned analyses. Beginning to recognize that, & can explain why, just one study is usually insufficient to answer given research problems/solve biological problems adequately. Follows templates for the identification, gathering, storing, analyzing, interpreting & integrating of data. Learning to consider reproducibility & rigor in experimental design, & to question templates that do/do not include these concepts. | Recognizes that explicit attention to experimental design will result in more informative data; designs experiments in consultation with experts in content & statistics. These experiments may include power calculation considerations, if relevant; modeling requirements; measurement/sampling error & missing data. Collaboratively designs experiments that address the need for reproducibility & sensitivity analysis. Learning to conceptualize pilot studies & sensitivity analyses. Learning to adapt problems so that hypotheses can be generated & made testable via experiments. | Independently designs appropriate & reproducible experiments & other data-collection projects, using methodologies that are aligned with the testing of specific hypotheses. Consistently identifies & justifies choices of instruments & outcomes (& covariates if relevant). Collaborates with experts as needed on appropriate use of advanced methods, including accommodating measurement & sampling error, attrition (if needed) & modeling requirements; can adapt complex problems so that hypotheses can be generated & made testable via experiments. Understands & can exploit the strengths & weaknesses of experimental design, data & modeling approaches with respect to the biological problem under consideration. Uses pilot studies & sensitivity analyses appropriately. |
| Identify data | Uses data, as directed. Does not find relevant data; cannot describe what makes data or a given data resource “relevant” to a given problem. | Correctly uses data that are provided or can follow a script/“recipe” to obtain (access, manage) relevant data to which they are guided. Cannot determine whether a given data-set or type is relevant for a given problem, but is developing an awareness that not all data are equally relevant, or equally well suited, to all research problems. Developing awareness of the features of data/data resources that constitute “relevance”, & that these features must be assessed before choosing data to use. | Can initiate a search for data & will ask if uncertain about the relevance for any given problem. Learning how to identify, & evaluate strengths & weaknesses of, data resources, to determine whether a given data-set or -type is relevant for a given problem; &, with guidance, how to leverage these to address given research problems. Learning how reproducibility can be affected by the choice (& features) of data. | Collaboratively identifies relevant data resources. Understands the relative strengths & weaknesses of data-sets & -types for addressing their specific problem. Learning to address & formulate scientific problems (based on recognized gaps in the knowledge base) utilizing relevant data resources. In own & others’ work, recognizes that, & articulates how, choices for data (collection or use) require assumptions & justification, & must yield reproducible results. | Identifies data that are directly relevant to a problem of own or others’ devising. Consistently identifies, & evaluates strengths & weaknesses of, a variety of data resources that can address a problem or help to formulate it more clearly; recognizes if the necessary data do not yet exist. Justifies the relevance of any given data-set to a problem in terms of their individual strengths & weaknesses. Articulates hypotheses, & designs experiments, that leverage strengths in the data; includes triangulating data or results to address weaknesses in the data. Identifies whether data appropriate to the specific scientific question were used when reviewing proposals, protocols, manuscripts, &/or other documentation describing data, & research results. |
| Identify & use methods | Uses methods that are provided & in a given order (i.e., a pipeline; & treats workflows | Uses given methods, as directed, & learning about the concepts of pipelines & workflows | Can identify methods, software, & pipelines that are relevant for a given problem; seeks guidance about the best approach. Learning to evaluate/rank & justify alternative methods in terms of general features of their efficiency & relevance for the given research problem. Beginning to recognize that a “pipeline” involves only the choice of which one(s) to use; while a “workflow” requires many choices & decisions. With guidance, seeks to identify & implement appropriate workflows to address given research problems. Learning how reproducibility can be affected by the choice & implementation of methods, including independent replication of essentially the same method vs. independent replication using diverse methods. | Collaboratively considers the knowledge base, & features of the relevant data & analysis options, in identifying the most appropriate approach(es) to tackle a scientific question. Uses appropriate analytic methods, pipelines, & workflows, recognizing, & taking advantage of the fact, that these may represent distinct approaches to the same problem. Knows when & how to control false discovery rates to promote reproducible results across methods. In own & others’ work, recognizes that, & articulates how, choices for methods, pipelines, & workflows require assumptions & justification, & must yield reproducible results. | Recognizes if/when the necessary methods, pipelines, & workflows to tackle a scientific question do |
| Interpret results/output | Treats the output of a program as the final/complete result (with no interpretation required) & is unaware of the concepts of validation & cross-validation or their importance for interpretation of results/output. Uses the p-value to indicate “truth” in statistical analysis. Over-interpretation is typical. | Interpretation of results depends on | Seeks guidance to interpret results/output, including considerations of alignment of methods & results. Understands that the | Can discern, based on immediate results, methods & hypotheses, whether more experiments &/or data processing are required for robust result interpretation; collaboratively uses the appropriate knowledge base & data resources to interpret results; resists reification & is committed to good-faith efforts to falsify hypotheses. Consistently & appropriately uses false discovery rate controls. | Interprets own & others’ results critically & with respect to the analysis plan; seeks/promotes alignment of methods, results, & interpretation. Prioritizes interpretable & reproducible results above any other outcome (e.g., publication or completion of tasks/project), & insists on false discovery rate controls & other sensitivity analyses in all work. Avoids problems that can arise in the interpretation of results, including bias, reification, & other failures of positivism. Is able to evaluate the quality & appropriateness of procedures, statistical analyses, & models when reviewing papers & projects/proposals, based on the writers’ (& own) interpretation of results. |
| Draw & contextualize conclusions | Does not draw appropriate conclusions from given results; without direction, will not even contextualize conclusions with the protocol that was followed. Not aware of the difference between conclusions about the null hypothesis & those about the research hypothesis. Conclusions may over- or under-state results & be driven by | Learning fundamentals of how appropriate conclusions are drawn from results, but may not be able to draw those conclusions from given results themselves. Learning to differentiate between conclusions about the null hypothesis & those about the research hypothesis. Learning why | With guidance, can draw conclusions in own work that are coherent with the research hypothesis/hypotheses & across the entire manuscript/writeup (as appropriate). Learning to critically contextualize results; is able to draw the most obvious conclusions, but struggles to see patterns, or draw more subtle conclusions. Learning that “full” contextualization of conclusions requires consideration of limitations deriving from methods & their applications, & their effects on results & conclusions. Learning to recognize how independence of multiple methods applied to similar data/problems supports reproducible conclusions. | Can extract scientific meaning from data analysis & knows the difference between statistical & biological significance. In their own & others’ work, seeks competing, plausible alternative conclusions. Can judge the scientific importance of their results, & draws conclusions accordingly. Can draw conclusions & contextualize results with respect to an entire manuscript/writeup in a given project or study, or with literature (as appropriate). Can detect when conclusions are not aligned with other aspects of the work (e.g., introduction/background, methods &/or results, or other experiments in the project). Gives careful consideration to limitations deriving from the method & its application in a specific study. Sees patterns, & perceives more subtle conclusions than earlier-stage scientists, & collaborates to fully articulate & motivate them. Writes the Discussion & Conclusions sections, including limitations, of own articles, with collaboration. | Expertly contextualizes results & conclusions with prior literature, across experiments or studies, & within any given document (e.g., manuscript, writeup, etc.). Strives to fully contextualize conclusions in own work, & also requires this in others’ work. Draws & contextualizes more subtle conclusions than at earlier stages. Can conceptualize new experiments based on the lack of robust &/or defensible conclusions in others’ work. Carefully considers consistency of conclusions with the other parts of own or others’ work. |
| Communicate | Does not communicate scientific information clearly or consistently; is unaware of community standards for scientific communication. Generally relies on lay summaries to support own communication; does not recognize that using original literature strengthens scientific communication. Does not differentiate appropriate & inappropriate scientific communication, nor understand the ethical implications of each. | Learning both to recognize the value of clear communication, & about the role of communication in sharing & publishing research, data, code, data management, tools & resources. Developing an awareness of community standards for scientific communication, & that these include documenting code, annotating data, & adding appropriate metadata. Does not adapt communication to fit the receiver. Learning to differentiate appropriate & inappropriate scientific communication, but does not yet understand that transparency in all communication represents ethical practice, | Understands the roles of sharing & publishing research, data, code, data management, tools & resources in scientific communication. Seeks guidance so that own communication is coherent, accurate, & consistent with community standards (e.g., following FAIR | Consistently & proficiently uses technical language to correctly describe what was done, why, & how. Sufficient consideration given to limitations, with explicit contextualization of results consistently included in the communication of results & their interpretation. Can adapt communication to fit the receiver; recognizes that sometimes communication must be consistent with community standards beyond their own discipline. Appropriately documents/annotates all data, code, tools, & resources for sharing, integration, & re-use. Understands that transparency in all communication represents ethical practice. | Is an expert communicator & reviewer of scientific communication; adheres to & promotes disciplinary standards for communication. Communicates in a manner that is consistent with standards across communities beyond their own discipline, as appropriate. Ensures communication is appropriate for a target audience, expertly adapting to fit the receiver(s). Communication is transparent, & appropriate to support reproducibility (& thereby, ethical practice) in every context. |
* A workflow’s framework supports decisions; a workflow is not necessarily linear, and can be multidirectional and iterative: any point can be re-iterated, or new starts can be made from within the workflow. A pipeline is unidirectional, not iterative within its structure (it is ballistic: once initiated, it runs), and has no decision points. Pipelines can exist within workflows, but workflows do not exist in pipelines.
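The pipeline/workflow distinction in this footnote can be sketched in code. This is a hypothetical illustration, not from the paper: the steps (`normalize`, `threshold`), cutoffs, and data are invented purely for demonstration.

```python
# Sketch of the footnote's definitions: a pipeline is a fixed,
# unidirectional sequence with no decision points; a workflow may
# branch, iterate, and re-enter steps with new parameters.

def normalize(values):
    return [v / max(values) for v in values]

def threshold(values, cut=0.5):
    return [v for v in values if v >= cut]

def run_pipeline(data):
    """Ballistic: once initiated, it runs every step in fixed order."""
    return threshold(normalize(data))

def run_workflow(data, target_n=2, max_rounds=5):
    """Iterative: a decision point re-runs the pipeline with a relaxed
    cutoff until enough values survive (or we give up)."""
    cut = 0.8
    kept = []
    for _ in range(max_rounds):
        kept = threshold(normalize(data), cut)  # a pipeline inside the workflow
        if len(kept) >= target_n:               # decision point
            return kept
        cut -= 0.2                              # iterate with new parameters
    return kept
```

The pipeline always produces the same single pass; the workflow re-enters the same steps with adjusted parameters, which is exactly what the footnote means by "any point can be re-iterated".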
‡ FAIR: Findable, Accessible, Interoperable, and Reusable.
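The Journeyman descriptors repeatedly mention "false discovery rate controls" without naming a procedure; as one hedged illustration, here is the Benjamini–Hochberg step-up, a common choice (the selection of this particular procedure is our assumption, not the paper's).

```python
# Benjamini–Hochberg step-up: a common false-discovery-rate control
# (an illustrative choice; the MR-Bi text does not mandate a procedure).
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])  # reject every hypothesis up to that rank
```

For example, `benjamini_hochberg([0.001, 0.008, 0.039, 0.041])` rejects all four hypotheses: the largest p-value (0.041) clears its own threshold 4 × 0.05 / 4 = 0.05, so the step-up keeps everything below it as well.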
Alignment of KSAs with competencies for bioinformatics and biomedical informatics.
| COMPETENCIES: where Bloom’s cognitive taxonomy is clearly invoked, these are identified in italics | Prerequisite Knowledge (bio) | Prerequisite Knowledge (comp) | Interdisciplinary Integration | Define a problem | Hypothesis generation | Experimental Design | Identify data | Identify and use methods | Interpret results/output | Draw and contextualize conclusions | Communicate |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Ability to | x | x | x | x | |||||||
| 2. | x | ||||||||||
| 3. Ability to | x | ||||||||||
| 4. Ability to | x | x | x | x | |||||||
| 5. Ability to | x | x | x | x | |||||||
| 6. Ability to apply design & development principles in the | x | x | |||||||||
| 7. Ability to | x | x | x | x | x | x | x | x | x | x | x |
| 8. Ability to function effectively on teams to accomplish a common goal | x | x | |||||||||
| 9. | |||||||||||
| 10. Ability to communicate effectively with a range of audiences | x | x | |||||||||
| 11. Ability to | x | x | x | x | x | x | x | x | x | x | x |
| 12. Recognition of the need for, & ability to engage in, CPD | |||||||||||
| 13. Detailed | x | x | x | x | x | x | x | x | x | x | x |
| 14. Ability to | |||||||||||
| Acquire professional perspective: | x | x | x | x | x | x | |||||
| Analyze problems: | x | x | x | x | x | x | |||||
| Produce solutions: use the problem analysis to identify & understand the space of possible solutions & | x | x | x | x | |||||||
| Articulate the rationale: | x | x | x | x | x | x | |||||
| x | x | x | x | x | x | x | x | x | x | ||
| x | x | x | x | x | x | x | x | x | x | x | |
| Work collaboratively: team effectively with partners within & across disciplines | x | x | |||||||||
| Educate, disseminate & | x | x | |||||||||
| Prerequisite knowledge & skills: students must | x | ||||||||||
| Fundamental knowledge: | x | x | x | ||||||||
| – Biology: molecule, sequence, protein, structure, function, cell, tissue, organ, organism, phenotype, populations. | x | x | x | ||||||||
| – Translational and clinical research: genotype, phenotype, pathways, mechanisms, sample, protocol, study, subject, evidence, evaluation. | x | x | x | x | x | ||||||
| – Healthcare: screening, diagnosis (diagnoses, test results), prognosis, treatment (medications, procedures), prevention, billing, healthcare teams, quality assurance, safety, error reduction, comparative effectiveness, medical records, personalized medicine, health economics, information security and privacy. | x | x | x | x | x | x | |||||
| – Personal health: patient, consumer, provider, families, health promotion, personal health records. | x | x | x | ||||||||
| – Population health: detection, prevention, screening, education, stratification, spatio-temporal patterns, ecologies of health, wellness. | x | x | x | ||||||||
| For substantive problems related to scientific inquiry, problem solving & decision-making, | x | x | x | x | x | x | x | x | x | x | |
| x | x | x | x | x | x | ||||||
| x | x | x | x | x | x | x | x | x | x | x | |
| x | x | x | x | x | x | x | x | ||||
| Theories: | x | x | |||||||||
| Typology: | x | x | x | x | x | ||||||
| Frameworks: | x | x | |||||||||
| Knowledge representation: | x | x | x | x | |||||||
| Methods & processes: | x | ||||||||||
| Prerequisite knowledge & skills: assumes | x | ||||||||||
| Fundamental knowledge: | x | x | x | x | x | x | x | ||||
| – Imaging and signal analysis. | x | x | x | x | x | x | x | ||||
| – Information documentation, storage, and retrieval. | x | x | x | x | x | x | x | ||||
| – Machine learning, including data mining. | x | x | x | x | x | x | x | x | x | ||
| – Networking, security, databases. | x | x | x | x | x | x | x | x | |||
| – Natural language processing, semantic technologies. | x | x | x | x | x | x | x | x | |||
| – Representation of logical and probabilistic knowledge and reasoning. | x | x | x | x | x | x | x | x | |||
| – Simulation and modeling. | x | x | x | x | x | x | x | x | x | x | |
| – Software engineering. | x | x | x | x | x | x | x | x | x | x | |
| Procedural knowledge & skills: for substantive problems, | x | x | x | x | x | x | x | ||||
| x | x | x | |||||||||
| x | x | ||||||||||
| Prerequisite knowledge and skills: Familiarity with fundamentals of social, organizational, cognitive, and decision sciences. | x | ||||||||||
| – Social, behavioral, communication, and organizational sciences: for example, computer supported cooperative work, social networks, change management, human factors engineering, cognitive task analysis, project management. | x | x | |||||||||
| – Ethical, legal, social issues: for example, human subjects, HIPAA, informed consent, secondary use of data, confidentiality, privacy. | x | x | |||||||||
| – Economic, social and organizational context of biomedical research, pharmaceutical and biotechnology industries, medical instrumentation, healthcare, and public health. | x | x | |||||||||
| – | x | x | x | x | x | x | x | x | x | x | x |
| – Analyze complex biomedical informatics problems in terms of people, organizations, and socio-technical systems. | x | x | |||||||||
| – | x | x | x | x | |||||||
| – | x | x | x | x | x | ||||||
| – | x | x | x | x | |||||||
| – | x | x | x | x | x | x | x | ||||
x = the indicated KSA must be applied at some level (of Bloom’s taxonomy) in order to accomplish the competency.
‡ It is unclear whether “computing requirements” in Bioinformatics competency #3 refers simply to computer hardware and software, or also includes methods, techniques, and other (more challenging to identify & utilize) aspects of the requirements “appropriate to its solution”. If this competency refers to an individual’s minimal assessment of “appropriate”, and if the problem for which computing is required has already been defined, then just one KSA (prerequisite knowledge, computing) is required. If the competency refers to an individual’s evaluation of competing solutions, particularly if the problem for which the computing is required has not been defined a priori, then multiple KSAs are required. Neither the 2014 nor the 2018 statement of the Bioinformatics competencies supports a distinction between these options.
§ Recognition of the need for a separate KSA in the MR-Bi that captures “ethical practice” was partly driven by Bioinformatics competency #9, but competency #9 is insufficiently specific for KSAs to be fully aligned with it. Possession of an understanding of ethical obligations, for example, does not translate to ethical practice, so even that KSA cannot be aligned with this competency. Similarly, Bioinformatics competency #11 suggests that “analysis” is sufficient to demonstrate the competency, but this is incorrect: the ability to carry out such analysis likewise does not translate to ethical practice.
* No actionable verbs relating to the learner are included in the articulation of this competency or competency domain, so no KSAs could be justifiably aligned with it. More specifically, because there is no indication of what a learner needs to do to demonstrate this competency/competency domain (Bioinformatics competencies #9 & #12; the medical informatics “human and social context” competencies), it cannot be taught or assessed in any systematic way. While these competencies are important, they are insufficiently specific to align confidently with any particular KSAs.
** This competency is insufficiently specific for determination of whether any KSAs are needed to achieve it. Because there is no indication of what a learner needs to do to demonstrate this competency/competency domain, it cannot be taught or assessed in any systematic way.
∞ This competency could be demonstrated either at the “apply” level (low Bloom’s level; few KSAs required) or at a considerably higher level: the level required to select, use, and interpret “statistical research methods”, as stated, is what doctoral-level statisticians are trained to deploy (high Bloom’s level; all KSAs required). The low-Bloom’s-level interpretation would not be contextualized in any of the specific research domains *by the performer*, while the higher-level performer might actually specialize in just one of the contexts (e.g., biology or computation). Therefore, this competency is insufficiently specific for determination of which KSAs are needed to achieve it.
How the principles of andragogy would be met with a Mastery Rubric like the MR-Bi.
Table adapted from [57], Table 4, with permission.
| Principle of andragogy | Met with a Mastery Rubric? |
|---|---|
| 1 | The Mastery Rubric is |
| 2 | The Mastery Rubric allows learners to recognize and demonstrate where they are (how they perform) in their development, and to motivate their acquisition of, and organize, new knowledge. This enables them to leverage their prior knowledge and experience, supporting self-direction along the articulated trajectory. | |
| 3 | The goals of learning are expressed in all three dimensions (KSAs, developmental stages, PLDs) of a Mastery Rubric. PLDs present concrete and recognizable targets that learners can see how to achieve. Courses and series of courses can be selected by learners (and aligned with PLDs by instructors) to ensure accomplishment of curriculum or course goals. | |
| 4 | The KSAs make explicit what learners need to learn to support both current and future practice. The trajectory offers justification for ongoing learning, while the PLDs describe readiness for the next learning goal. Together, these provide learners with a rationale for commitment and engagement throughout a curriculum. | |
| 5 | The Mastery Rubric supports instruction and curriculum design that emphasize authenticity and transferability of new knowledge (e.g., to new contexts) through: 1) KSAs that are relevant; 2) trajectories that support evolution through recognizable and practical stages; and 3) PLDs that represent the development of observable behaviors. Together, these three dimensions facilitate learners’ selection of curricula, programs, or courses that support both current and future practice in the domain. | |
| 6 | The Mastery Rubric is intended to function as a |