Responding to Chu and Evans (1) in PNAS, Barbier et al. (2) advocate augmenting the human ability to “ingest and analyze” the publications in a field through natural language processing (NLP) categorization of papers (3), and advocate “open science” to facilitate such NLP use. NLP-based categorization (perhaps augmented with nonlinguistic artificial intelligence/machine learning [AI/ML] algorithms informed by, for example, citation networks and temporal patterns) can streamline navigating the scientific literature.

The adoption of efficient technologies for literature scanning (those reducing effort by an order of magnitude or better) can create some cognitive slack, permitting scholars added time to engage with novel ideas. We argue that fundamental progress slows when scholars are cognitively overwhelmed by the enormous number of papers published each year in large fields (1). AI/ML-based technologies can reduce this cognitive load by decreasing the time scholars spend delineating topics, categorizing and summarizing papers, and tagging papers for close reading.

These technologies must be deployed thoughtfully, however, lest they exacerbate the ossification identified in ref. 1. Categorization in AI/ML applications often depends on prior categorization by humans. Any existing human behaviors (engaging with the literature in routine ways and repeatedly citing the established canon, for example) are likely to be accentuated. AI/ML may direct even more attention to the already well attended.

Categorization may not be enough. Barbier et al. (2) suggest that AI/ML be used to recognize “outliers.” But will busy scholars attend to these outliers, or discard them? In very large fields, there are likely to be numerous outliers, including many with little scientific merit. How will scholars know which of these outliers to engage with? Scholars in large fields may be overwhelmed by choice, even when selecting among the subset of papers categorized as outliers.
Until AI/ML becomes capable of mining meaning and significance from each paper, moving beyond categorization to scientific understanding and even appreciation, the problem of limited scholarly attention grappling with a rapidly increasing supply of ideas remains.

I agree with Barbier et al. (2) that NLP can increase scholarly efficiency (and that open science is necessary to fully realize these benefits of NLP). Efficiency alone, however, does not address the problems reported in ref. 1. The social structure of science must change: Current incentives in academia often push scholars toward exploitation rather than exploration (4); many scientists are likely to take advantage of the efficiency provided by AI/ML to increase their production of canon-based papers rather than investing more time in novel ideas.