| Literature DB >> 33948236 |
Heidi A Hanson1,2, Claire L Leiser1,3, Gretchen Bandoli4, Brad H Pollock5, Margaret R Karagas6, Daniel Armstrong7, Ann Dozier8, Nicole G Weiskopf9, Maureen Monaghan10,11, Ann M Davis12,13, Elizabeth Eckstrom14, Chunhua Weng15, Jonathan N Tobin16,17, Frederick Kaskel18, Mark R Schleiss19, Peter Szilagyi20, Carrie Dykes21, Dan Cooper22, Shari L Barkin23.
Abstract
Life course research embraces the complexity of health and disease development, tackling the extensive interactions between genetics and environment. This interdisciplinary blueprint, or theoretical framework, offers a structure for research ideas and specifies relationships between related factors. Traditionally, methodological approaches attempt to reduce the complexity of these dynamic interactions and decompose health into component parts, ignoring the complex reciprocal interaction of factors that shape health over time. New methods that match the epistemological foundation of the life course framework are needed to fully explore adaptive, multilevel, and reciprocal interactions between individuals and their environment. The focus of this article is to (1) delineate the differences between lifespan and life course research, (2) articulate the importance of complex systems science as a methodological framework in the life course research toolbox to guide our research questions, (3) raise key questions that can be asked within the clinical and translational science domain utilizing this framework, and (4) provide recommendations for life course research implementation, charting the way forward. Recent advances in computational analytics, computer science, and data collection could be used to approximate, measure, and analyze the intertwining and dynamic nature of genetic and environmental factors involved in health development. © The Association for Clinical and Translational Science 2020.
Keywords: Translational life course research; complexity science; cytomegalovirus; life course and lifespan; life course methods; life course research priorities
Year: 2020 PMID: 33948236 PMCID: PMC8057465 DOI: 10.1017/cts.2020.492
Source DB: PubMed Journal: J Clin Transl Sci ISSN: 2059-8661
Core definitions
| Term | Definition |
|---|---|
| Lifespan | A measure of longevity reflecting the underlying chronologic biologic aging of an individual that occurs for everyone. |
| Life course | The interactions of contextual factors over time that affect health and development and vary among individuals. |
| Framework | A conceptualization structure providing guidance to the researcher as research questions are developed and theories are tested. |
| Theoretical framework | A framework that has been derived from tested theories and is generally accepted. The set of concepts drawn from the theory can be thought of as the guide to build and support a study and should define the epistemological, methodological, and analytical approach to the problem. |
| Methodological framework | Methodological guide linking the theoretical framework to the appropriate methodological tools. |
| Complexity science | The study of complex systems and problems that are dynamic, unpredictable, and multidimensional. These patterns emerge over time as a result of systems that are interconnected. |
Fig. 1. Five recommended pathways for moving translational life course research forward.
Examples of existing methods for investigating health across the lifespan that incorporate multiple interactions, time as a dimension, systems science-based approaches, and computational approaches
| Challenges | Potential solutions |
|---|---|
| Data storage | Close collaboration with computer science and data science colleagues. |
| Fragmented healthcare leads to disparate data types and storage | Development of novel ways to combine historical data that are flexible and scalable. Identify ways to quantify error when combining data from multiple sources. |
| Ethical aspects and need for data oversight committees | Data repositories should have an oversight committee that includes representatives from the community. Data storage and sharing ethics should be a common topic of conversation for staff involved in curating, maintaining, and distributing the data. |
| Data need to be standardized, curated, and of high quality | Development of rigorous, sharable standards for standardizing and curating data. Quantify error related to a dataset or measure. |
| Data ownership | Recognition that individuals are owners of their data. Outward-facing programs that garner community involvement and knowledge about the datasets being used for health research. Discussion about how to give back to the community. Encourage discussion with the community about knowingly contributing data that can lead to medical innovations. |
| Scalability | Data science core resources need to be able to serve a broad range of research questions across multiple disciplines. Processes should be developed that are general enough to allow them to be far-reaching. |
| Need to have a culture of data sharing | Academic health centers should openly enable, encourage, and reward data sharing. |
| Complying with rules for data access | Create easily accessible documents that clearly state data use and reuse rules. Educate data users, trainees, and investigators on responsible data use and reuse practices. |
| Requires transdisciplinary teams that may have differences in communication | Organize workshops, symposia, and meetings that regularly bring together scientists from multiple disciplines. Provide educational opportunities for investigators to learn how to present their research in lay language to make it accessible for all audiences. |
| Novel methods may be deemed high risk | Develop NIH initiatives, like Physical Sciences in Oncology (PSON), that bring together scientists from across a wide range of disciplines to address major questions and barriers in research. |
| Models are only as good as the data | Set high benchmarks for data processing and transparency. |
| Requires training and a shift in epidemiological thinking | Develop easily accessible synchronous and asynchronous learning opportunities for investigators. |
| Models rest on assumptions about the distribution, sparsity, magnitude of effect, relationships between actors, and other factors that must be carefully considered | Create minimum standards for publishing complexity and machine learning results in health care journals. Create guidelines for systematic and rigorous research. |
| Algorithms and simulations are not substitutes for formal statistical modeling | Selection of models should be based on the epistemological foundation of the question. Methods should be combined with more traditional models to fully understand problems. Limitations should be clearly stated. |
| Institution-specific software and data storage procedures | Develop standardized, extensible data schemas with well-defined entities, attributes, and metadata. |
| High variation in data types and structure across institutions | Document data architecture and create templates to ensure standardized collection of data. |
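The table's recommendation to develop standardized, extensible data schemas with well-defined entities, attributes, and metadata can be illustrated with a minimal sketch. The Python below is a hypothetical example, not the article's implementation: the names `Attribute`, `PersonRecord`, and `merge_records` are assumptions chosen for illustration. It shows how explicit units, provenance, and observation dates in the schema make longitudinal (life course) queries possible and let cross-source disagreement be inspected after merging fragmented records.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Attribute:
    """One observed measurement, with metadata kept alongside the value."""
    name: str          # measured quantity, e.g. "systolic_bp"
    value: float
    unit: str          # units recorded explicitly to support curation
    source: str        # originating institution/system (provenance)
    observed_on: date  # time kept as an explicit dimension

@dataclass
class PersonRecord:
    """A person entity holding a growing list of observations."""
    person_id: str
    attributes: list = field(default_factory=list)

    def add(self, attr: Attribute) -> None:
        self.attributes.append(attr)

    def history(self, name: str) -> list:
        """All observations of one attribute, ordered in time,
        supporting longitudinal (life course) queries."""
        return sorted(
            (a for a in self.attributes if a.name == name),
            key=lambda a: a.observed_on,
        )

def merge_records(a: PersonRecord, b: PersonRecord) -> PersonRecord:
    """Combine records for the same person from two sources, keeping
    provenance so cross-source error can later be quantified."""
    if a.person_id != b.person_id:
        raise ValueError("records describe different people")
    merged = PersonRecord(a.person_id)
    merged.attributes = a.attributes + b.attributes
    return merged
```

Because every observation carries its source, a downstream step could group a merged history by `source` and compare overlapping measurements, one simple way to "quantify error when combining data from multiple sources" as the table suggests.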