Kathryn Cowie, Asad Rahmatullah, Nicole Hardy, Karl Holub, Kevin Kallmes.
Abstract
BACKGROUND: Systematic reviews (SRs) are central to evaluating therapies but have high costs in terms of both time and money. Many software tools exist to assist with SRs, but most do not support the full process, and the transparency and replicability of SRs depend on performing and presenting evidence according to established best practices.
Keywords: feature analysis; software tools; systematic reviews
Year: 2022 PMID: 35499859 PMCID: PMC9112080 DOI: 10.2196/33219
Source DB: PubMed Journal: JMIR Med Inform
Features from systematic reviews not assessed in this review, with rationale.
| Features not assessed | Rationale |
| Functional | Part of our inclusion criteria |
| Reference allocation | Reference management excluded from this review |
| Randomizing order of references | Not part of systematic review process |
| Non-Latin character support | Review focused on English language systematic review software |
| Straightforward system requirements | Part of our inclusion criteria |
| Installation guide | Not necessary for web-based software |
| No coding | Part of our inclusion criteria |
| Mobile- or tablet-responsive interface | Not necessary for web-based software |
| Other stages | Not a discrete or comparable step |
| Multiple projects | Not part of the systematic review process |
| Work allocation | Duplicated with “distinct user roles” |
| Export of decisions | Duplicated with export |
| User setup | Duplicated with “distinct user roles” |
| Filter references | Duplicated with screening records |
| Search references | Duplicated with “database search” |
| Insecure website | Information not available to reviewers |
| Security | Information not available to reviewers |
| Setting up review | Not a discrete or comparable step |
| Automated analysis | Not a discrete or comparable step |
| Text analysis | Not part of the systematic review process |
| Report validation | Not part of the systematic review process |
| Document management | Reference management excluded from this review |
| Bibliography | Reference management excluded from this review |
The criteria for each selected feature, as well as the rationale.
| Classification and variable name | Coding | Feature from | Rationale (if added by authors) |
| Retrieval | | | |
| Database search | 1—literature search through APIa integration with a database; 0—no method for retrieving studies directly from a database | Kohl et al | —b |
| Reference importing | 1—import of references as RISc files or other file types; 0—references have to be entered manually | Harrison et al | — |
| Manual addition | 1—add a reference by entering study metadata; 0—no method for adding individual references and gray literature | Added by the authors | Ability to add expert additions is called for by the PRISMAd 2020 guidelines and checklist |
| Attaching full-text PDFs | 1—ability to import or upload full-text PDFs associated with each study under review; 0—no method for importing full-text PDFs in the screening process | Harrison et al | — |
| Automated full-text retrieval | 1—ability to fetch some or all full texts via API or other nonmanual method; 0—full texts must be uploaded manually, or full-text upload not supported | Added by the authors | Full texts are required for content extraction, and manual upload represents a major time investment by the user |
| Appraisal | | | |
| Title/abstract screening | 1—inclusion and exclusion by title and abstract only; 0—no system for inclusion and exclusion of references by title and abstract | Harrison et al | — |
| Full-text screening | 1—a distinct full-text screening phase; 0—there is no full-text screening phase | Harrison et al | — |
| Dual screening and adjudication | 1—choice for single or double screening and a method for resolving conflicts; 0—no ability to configure screening mode or no ability to resolve conflicts | Harrison et al | — |
| Keyword highlighting | 1—abstract keywords are highlighted; keywords can be user- or AIe-determined; 0—no keyword highlighting is possible | Harrison et al | — |
| Machine learning/automation (screening) | 1—has a form of machine learning or automation of the screening process; 0—does not support any form of machine learning or automation of the screening process | Added by the authors | Automated screening has been called for by the scientific community |
| Deduplication of references | 1—automatically identifies duplicate references or marks potential duplicates for manual review; 0—has no mechanism for deduplication | Harrison et al | — |
| Extraction | | | |
| Tagging references | 1—ability to attach tags that reflect the content of underlying studies to specific references; 0—no means for attaching content-related tags to references | Van der Mierden et al | — |
| Data extraction | 1—facilitates extraction and storage of quantitative data into a form or template; 0—does not permit extraction and storage of quantitative data | Harrison et al | — |
| Dual extraction | 1—ability for 2 independent reviewers to collect data on each study and for a third person to adjudicate differences; 0—no ability to have independent extraction and adjudication | Added by the authors | Dual extraction improves the accuracy of data gathering |
| Risk of bias | 1—supports critical appraisal of studies through risk of bias assessments; 0—no built-in features or templates to assess risk of bias | Kohl et al | — |
| Output | | | |
| Flow diagram creation | 1—automated or semiautomated creation of PRISMA flow diagrams; 0—the tool cannot automatically provide a flow diagram meeting the PRISMA criteria | Van der Mierden et al | — |
| Manuscript writing | 1—ability to write or edit a report or manuscript; 0—no ability to write or edit a report or manuscript | Marshall et al | — |
| Citation management | 1—ability to insert citations based on stored study metadata into a text editor; 0—no ability to insert citations into a document | Added by the authors | The ability to add and manage citations is necessary to document the source of review data |
| Data visualizations | 1—generation of figures or tables to assist with data presentation; 0—no built-in way to generate figures or tables | Kohl et al | — |
| Export | 1—supports export of references, study metadata, or collected data; 0—has no export feature | Harrison et al | — |
| Admin | | | |
| Protocol | 1—supports protocol development or filling in a research question template; 0—no protocol development or templates | Kohl et al | — |
| Distinct user roles | 1—distinct user roles and permissions; 0—no distinct roles; everybody has the same role and rights in the project | Harrison et al | — |
| Activity monitoring | 1—software monitors and displays progress through the project; 0—there is no way to determine overall progress of the project (eg, % completed) | Harrison et al | — |
| Comments or chat | 1—ability to leave comments or notes on studies; 0—it is not possible to attach comments to references | Van der Mierden et al | — |
| Training | 1—there are publicly available web-based tutorials, help pages, training videos, or forums maintained by the software provider; 0—there are no accessible tutorials or training materials maintained by the software provider | Harrison et al | — |
| Customer support | 1—customer support, such as support contact information, is provided on request; 0—customer support is not clearly available | Van der Mierden et al | — |
| Access | | | |
| Pricing (free to use) | 1—a free version is available for users; 0—the tool must be purchased, or free or trial accounts have severe limitations that can compromise the systematic review | Harrison et al | — |
| Living/updatable | 1—new records can be added after a project has been completed; 0—new records cannot be added after a project has been completed | Added by the authors | Living systematic reviews have been called for as a novel paradigm addressing a key limitation of systematic reviews |
| Public outputs | 1—web-based visualizations or writing can be made publicly visible; 0—review data and outputs cannot be made publicly visible | Added by the authors | Web-based availability of systematic review outputs is important for transparency and replicability of research |
| User collaboration | 1—multiple users can work simultaneously on 1 review; 0—it is not possible for multiple users to work at the same time on the same project, independently | Harrison et al | — |
aAPI: application programming interface.
bRationale is provided only for features added in this review; all other features were drawn from existing feature analyses of systematic review software tools.
cRIS: Research Information System.
dPRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
eAI: artificial intelligence.
Figure 1PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)-based chart showing the sources of all tools considered for inclusion, including 2-phase screening and reasons for all exclusions made at the full software review stage. SR: systematic review.
Breakdown of software tools for systematic review by process type (full process, screening, extraction, or visualization; n=24).
| Type | Tools, n (%) | Software tools |
| Full process | 15 (63) | Cadima, Covidence, Colandr, DistillerSR, EPPI-Reviewer Web, Giotto Compliance, JBI SUMARI, LitStream, Nested Knowledge, PICOPortal, RevMan Web, SRDB.PRO, SRDR+, SyRF, SysRev |
| Screening | 5 (21) | Abstrackr, Rayyan, RobotAnalyst, SWIFT-Active Screener, SR-Accelerator |
| Extraction | 3 (13) | Data Abstraction Assistant, RobotReviewer, SRDR |
| Visualization | 1 (4) | COVID-NMA |
Feature assessment scores by feature class for each systematic review tool analyzed.
| Systematic review tool | Retrieval (n=5), n (%) | Appraisal (n=6), n (%) | Extraction (n=4), n (%) | Output (n=5), n (%) | Admin (n=6), n (%) | Access (n=4), n (%) | Totala (n=30), n (%) |
| DistillerSR | 5 (100) | 6 (100) | 3 (75) | 4 (80) | 6 (100) | 2 (50) | 26 (87) |
| Nested Knowledge | 4 (80) | 5 (83) | 2 (50) | 5 (100) | 6 (100) | 3 (75) | 25 (83) |
| EPPI-Reviewer Web | 4 (80) | 6 (100) | 4 (100) | 3 (60) | 5 (83) | 2 (50) | 24 (80) |
| Giotto Compliance | 4 (80) | 6 (100) | 3 (75) | 3 (60) | 5 (83) | 2 (50) | 23 (77) |
| LitStream | 2 (40) | 5 (83) | 3 (75) | 3 (60) | 6 (100) | 3 (75) | 22 (73) |
| SRDB.PRO | 5 (100) | 4 (67) | 2 (50) | 3 (60) | 6 (100) | 1 (25) | 21 (70) |
| Covidence | 3 (60) | 5 (83) | 4 (100) | 2 (40) | 5 (83) | 1 (25) | 20 (67) |
| JBI SUMARI | 3 (60) | 4 (67) | 2 (50) | 4 (80) | 5 (83) | 2 (50) | 20 (67) |
| SysRev | 4 (80) | 3 (50) | 2 (50) | 2 (40) | 5 (83) | 3 (75) | 19 (63) |
| Cadima | 2 (40) | 5 (83) | 3 (75) | 2 (40) | 4 (67) | 2 (50) | 18 (60) |
| Colandr | 4 (80) | 6 (100) | 1 (25) | 2 (40) | 3 (50) | 2 (50) | 18 (60) |
| PICOPortal | 2 (40) | 6 (100) | 2 (50) | 2 (40) | 3 (50) | 3 (75) | 18 (60) |
| Rayyan | 3 (60) | 5 (83) | 2 (50) | 2 (40) | 4 (67) | 2 (50) | 18 (60) |
| SRDR+ | 2 (40) | 3 (50) | 3 (75) | 1 (20) | 6 (100) | 3 (75) | 18 (60) |
| RevMan Web | 2 (40) | 1 (17) | 2 (50) | 3 (60) | 6 (100) | 2 (50) | 16 (53) |
| SWIFT-Active Screener | 3 (60) | 6 (100) | 0 (0) | 1 (20) | 5 (83) | 1 (25) | 16 (53) |
| Abstrackr | 1 (20) | 5 (83) | 1 (25) | 1 (20) | 5 (83) | 2 (50) | 15 (50) |
| RobotAnalyst | 2 (40) | 3 (50) | 0 (0) | 2 (40) | 5 (83) | 2 (50) | 14 (47) |
| SRDR | 1 (20) | 0 (0) | 2 (50) | 2 (40) | 5 (83) | 3 (75) | 13 (43) |
| SyRF | 1 (20) | 4 (67) | 2 (50) | 1 (20) | 2 (33) | 2 (50) | 12 (40) |
| Data Abstraction Assistant | 2 (40) | 0 (0) | 1 (25) | 0 (0) | 3 (50) | 3 (75) | 9 (30) |
| SR-Accelerator | 2 (40) | 4 (67) | 0 (0) | 0 (0) | 2 (33) | 1 (25) | 9 (30) |
| RobotReviewer | 2 (40) | 0 (0) | 2 (50) | 1 (20) | 2 (33) | 1 (25) | 8 (27) |
| COVID-NMA | 0 (0) | 0 (0) | 0 (0) | 2 (40) | 1 (17) | 2 (50) | 5 (17) |
aThe total number of features offered by each software tool across all feature classes; rows are presented in descending order of total score.
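The "n (%)" cells above follow directly from the binary coding scheme: each feature is scored 1 or 0, counts are summed per feature class, and percentages are taken against the class sizes (Retrieval n=5, Appraisal n=6, Extraction n=4, Output n=5, Admin n=6, Access n=4; total n=30). As an illustrative sketch (the function and variable names here are hypothetical, not from the paper), the scoring can be reproduced as:

```python
# Illustrative sketch of how the per-class and total scores in the
# assessment table are derived from binary feature counts.
CLASS_SIZES = {"Retrieval": 5, "Appraisal": 6, "Extraction": 4,
               "Output": 5, "Admin": 6, "Access": 4}  # 30 features total

def score_tool(counts):
    """Map {class: number of supported features} to 'n (%)' cells,
    rounding percentages to whole numbers as the table does."""
    cells = {}
    total = possible = 0
    for cls, size in CLASS_SIZES.items():
        have = counts.get(cls, 0)
        cells[cls] = f"{have} ({round(100 * have / size)}%)"
        total += have
        possible += size
    cells["Total"] = f"{total} ({round(100 * total / possible)}%)"
    return cells

# Example: DistillerSR's counts from the table yield a total of 26/30.
distiller = {"Retrieval": 5, "Appraisal": 6, "Extraction": 3,
             "Output": 4, "Admin": 6, "Access": 2}
print(score_tool(distiller)["Total"])  # 26 (87%)
```

Running this on any row of the table reproduces its percentages, which is also how the Admin cell for a tool with 4 of 6 features works out to 67%.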
Figure 2Stacked bar chart comparing the percentage of supported features, broken down by their feature class (retrieval, appraisal, extraction, output, admin, and access), among all analyzed software tools.
Figure 3Heat map of features observed in 24 analyzed software tools. Dark blue indicates that a feature is present, and light blue indicates that a feature is not present.