| Literature DB >> 35359623 |
Kareem Khalifa, Farhan Islam, J P Gamboa, Daniel A Wilkenfeld, Daniel Kostić.
Abstract
We provide two programmatic frameworks for integrating philosophical research on understanding with complementary work in computer science, psychology, and neuroscience. First, philosophical theories of understanding have consequences about how agents should reason if they are to understand that can then be evaluated empirically by their concordance with findings in scientific studies of reasoning. Second, these studies use a multitude of explanations, and a philosophical theory of understanding is well suited to integrating these explanations in illuminating ways.
Keywords: computation; dynamic systems; explanation; integration; mechanism; topology; understanding
Year: 2022 PMID: 35359623 PMCID: PMC8960449 DOI: 10.3389/fnsys.2022.764708
Source DB: PubMed Journal: Front Syst Neurosci ISSN: 1662-5137
FIGURE 1 | Two ways to integrate philosophical work on understanding with relevant sciences. (A) Naturalized epistemology of understanding. (B) Understanding-based integration.
TABLE 1 | Kinds of understanding that philosophers infrequently discuss (Khalifa, 2017, p. 2).
| Kind of understanding | Typical complement | Examples |
| --- | --- | --- |
| Propositional | That + declarative sentence | I understand that you might not enjoy reading this book. |
| Broad linguistic | Name of a language | Schatzi understands German. |
| Narrow linguistic | What + a linguistic expression + means | Schatzi understands what “Ich bin ein Berliner” means. |
| Procedural | How + infinitive | Miles understands how to play trumpet. |
| Non-explanatory interrogative | Embedded question that does not seek an explanation as its answer (most who, where, what, and when questions) | I understand who my friends are. |
FIGURE 2 | Computational and mechanistic explanations involved in counterfactual reasoning. Mental simulation (gray box) both contributes to the computational explanation of counterfactual reasoning (black box) and is mechanistically explained by the activation of the default network.
FIGURE 3 | Different inter-explanatory relationships. Letters at the head of an arrow denote phenomena to be explained; those at the tail, factors that do the explaining. Thus, X1 explains X2 and X2 explains Y in (A); X1 and X2 independently explain Y in (B). X1 explains both X2 and Y, and X2 also explains Y in (C); X3 explains both X1 and X2, which in turn each explain Y in (D).
TABLE 2 | Putatively non-mechanistic explanations discussed by philosophers.
| Explanans | Explanandum | Scientific example | Philosophical work discussing example |
| --- | --- | --- | --- |
| **Computational** | | | |
| Difference of Gaussians | Stereoscopic vision | | |
| Exhaustive search | Recall (memory) | | |
| Gain field encoding | Hand–eye coordination | | |
| Geon composition | Object recognition | | |
| Hybrid computation | Efficiency of brain | | |
| Inhibitory feedback | Normalization | | |
| Internal integration | Eye movement | | |
| Line attractor of choice axis, stimuli’s selection vector | Context-dependent decision making | | |
| Mapping non-coplanar points to unique rigid configuration | Three-dimensional visual structure of moving objects | | |
| Optimization of spatial and spectral information recovery (Gabor function) | V1 receptive fields | | |
| Similarity of stimulus to stored exemplars | Categorization | | |
| **Topological** | | | |
| Closeness centrality | Speech and tonal processing | | |
| Mean connectivity | Ictogenicity | | |
| Motif frequency | Functional connectivity | | |
| Navigation efficiency, diffusion efficiency | Efficiency of neuronal communication | | |
| Network communicability | Cognitive control | | |
| Small-worldness | Information propagation | | Kostić and Khalifa, 2022 (see text footnote 4) |
| **Dynamical** | | | |
| Coupling of eye and bodily movements | Onset of motor control | | |
| Coupling ratio | Bimanual coordination (relative phase) | | |
| Strength of memory trace, salience of target, waiting time, stance | Infant reaching (A-not-B error) | | |
| Potassium and sodium ion flows | Neural excitability | | |
The explanans (first column) is the factor that explains. The explanandum (second column) is the phenomenon to be explained. An asterisk indicates that the author takes the explanation to be mechanistic.
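To make the table's first computational entry concrete: a Difference-of-Gaussians model explains center-surround receptive-field responses as a narrow excitatory Gaussian minus a broader inhibitory one. The following is a minimal sketch (not from the paper; the function names and sigma values are illustrative assumptions) of the 1D profile such a model computes:

```python
import numpy as np

def gaussian(x, sigma):
    """Normalized Gaussian centered at zero."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def difference_of_gaussians(x, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround profile: narrow excitatory Gaussian minus broad
    inhibitory Gaussian. Positive near the center, negative in the flanks."""
    return gaussian(x, sigma_center) - gaussian(x, sigma_surround)

x = np.linspace(-10, 10, 201)
dog = difference_of_gaussians(x)

# Excitatory peak at the center of the receptive field...
assert dog[100] == dog.max() and dog[100] > 0
# ...with inhibitory (negative) flanks away from the center.
assert dog[30] < 0 and dog[170] < 0
```

The explanatory claim the table alludes to is that this filter shape, not any particular biochemical implementation of it, accounts for the response pattern, which is why it is classed as computational rather than mechanistic.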