| Literature DB >> 25414655 |
Analogy, explanation, and proof
John E. Hummel, John Licato, Selmer Bringsjord
Abstract
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.
Keywords: LISA; analogy; explanation; logic; modeling
Year: 2014 PMID: 25414655 PMCID: PMC4222223 DOI: 10.3389/fnhum.2014.00867
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Figure 1. LISA representation of the proposition prefer (ministers, Coke). Semantic units (small circles) represent the semantic features of objects and relational roles. Object and role units (large circles and triangles, respectively) represent objects, such as ministers (M) and Coke (C), and relational roles, such as prefer-agent (p1) and preferred-thing (p2), in a localist fashion. Sub-proposition (SP; aka role-binding) units (rectangles) represent bindings of arguments (objects or complete propositions) to relational roles, and proposition (P) units (ovals) represent complete propositions. When a proposition becomes active (i.e., enters working memory), role-filler bindings are represented by synchrony of firing: separate role bindings (SPs with their object, role, and associated semantic units) fire out of synchrony with one another, and units representing the same role binding fire in synchrony with one another.
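As a reading aid, here is a minimal Python sketch of the four-layer structure the caption describes: semantic features at the bottom, localist object and role units above them, SP units binding each role to its filler, and a P unit collecting the SPs. This is an illustration only, not the paper's implementation; the class names, the example feature sets, and the phase-printing loop are all assumptions.

```python
# Minimal sketch of the layered representation in Figure 1. Illustrative
# only; class names and feature sets are assumptions, not the model's code.

from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    semantics: set = field(default_factory=set)  # semantic feature units

@dataclass
class SP:
    """Sub-proposition (role-binding) unit: binds one role to one filler."""
    role: Unit
    filler: Unit  # an object unit (or a whole proposition, when nested)

@dataclass
class Proposition:
    name: str
    sps: list  # one SP per role binding

# Object units (large circles) with their semantic features (small circles).
ministers = Unit("M:ministers", {"human", "group", "clergy"})
coke = Unit("C:Coke", {"beverage", "sweet", "artifact"})

# Relational-role units (triangles) for the two roles of "prefer".
prefer_agent = Unit("p1:prefer-agent", {"state", "positive", "agent"})
preferred_thing = Unit("p2:preferred-thing", {"state", "positive", "patient"})

# SP units (rectangles) bind roles to fillers; the P unit (oval) collects them.
prefer_ministers_coke = Proposition(
    "prefer(ministers, Coke)",
    [SP(prefer_agent, ministers), SP(preferred_thing, coke)],
)

# In working memory, binding is carried by timing, not wiring: units within an
# SP fire in synchrony, and different SPs fire out of synchrony.
for phase, sp in enumerate(prefer_ministers_coke.sps):
    print(f"phase {phase}: {sp.role.name} fires with {sp.filler.name}")
```

The final loop echoes the caption's binding scheme: in working memory, structure alone does not carry role-filler bindings; relative firing phase does.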
Figure 2. LISA representation of the cause-effect relation in which entity 1's believing proposition p and entity 2's believing p jointly cause entity 1 and entity 2 to agree. To represent that believe (e1, p) and believe (e2, p) jointly cause something, the units representing these propositions (left-most ovals) share bi-directional excitatory connections with a unit (left-most diamond) representing a cause group. To represent that agree (e1, e2) is the effect of something, the unit representing that proposition (right-most oval) shares a bi-directional excitatory connection with a unit (right-most diamond) representing an effect group. To represent that the cause on the left is the cause of the effect on the right, the corresponding cause and effect groups share bi-directional excitatory connections with a unit (upper-most diamond) representing a cause-effect (CE) group. Connections between the group units and their respective cause, effect, and CE semantic units are not shown.
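The group-unit wiring can be sketched the same way. The snippet below is again illustrative: the Group class and its fields are assumptions, and plain strings stand in for the full P-unit structures of Figure 1. It shows a cause group bundling the two belief propositions, an effect group holding the agreement, and a CE group binding that cause to that effect.

```python
# Sketch of the group-unit wiring in Figure 2. Illustrative only; strings
# stand in for full P-unit structures.

from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    members: list  # the propositions (or groups) this group bundles
    semantics: set = field(default_factory=set)  # cause/effect/CE semantics

# The two beliefs jointly constitute the cause; the agreement is the effect.
cause = Group("cause-1", ["believe(e1, p)", "believe(e2, p)"], {"cause"})
effect = Group("effect-1", ["agree(e1, e2)"], {"effect"})

# The CE group binds this particular cause to this particular effect.
ce = Group("CE-1", [cause, effect], {"cause-effect"})

print(ce.name, "links", cause.members, "to", effect.members)
```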
Figure 3. Illustration of the retrieve-map-infer cycle that governs explanation-generation in LISA. The table below lists the mappings established on each iteration of the cycle.
| Iteration | Source | Mappings |
| 0 | (from …) | ministers → person; Coke → product |
| 1 | (from …) | agree1 → agree1; agree2 → agree2; ministers → entity1; corporation → entity2 |
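Putting the two captions and the table together, a hedged sketch of the retrieve-map-infer loop might look as follows. Everything here is an assumed stand-in for LISA's actual machinery: the proposition-string representation, the token-overlap retrieval cue, and the helper names tokens, retrieve, and explain are all illustrative. The point is only the control structure: retrieve a source analog cued by the explanandum plus the inferences made so far, map it onto the target, import unmapped source structure as candidate explanatory propositions, and repeat, letting the same target element map to elements of different sources on different iterations, which is the violation of analogy's one-to-one mapping constraint that the abstract describes.

```python
# Hedged sketch of a retrieve-map-infer loop. The representation (lists of
# proposition strings) and the token-overlap scoring are illustrative
# stand-ins for LISA's retrieval, mapping, and inference machinery.

def tokens(props):
    """All constituent symbols in a list of proposition strings."""
    return {t for p in props
            for t in p.replace("(", " ").replace(")", " ").replace(",", " ").split()}

def retrieve(target, ltm, used):
    """Retrieve the unused source analog sharing the most symbols with the target."""
    candidates = [s for s in ltm if id(s) not in used]
    if not candidates:
        return None
    best = max(candidates, key=lambda s: len(tokens(s) & tokens(target)))
    return best if tokens(best) & tokens(target) else None

def explain(explanandum, ltm, max_iter=5):
    target, used = [explanandum], set()
    for _ in range(max_iter):
        source = retrieve(target, ltm, used)   # retrieve
        if source is None:
            break
        used.add(id(source))
        # Map: shared symbols stand in for unit-level correspondences. Across
        # iterations the same target element may map to elements of different
        # sources, relaxing analogy's one-to-one mapping constraint.
        if not tokens(source) & tokens(target):
            break
        # Infer: import source propositions absent from the target as
        # candidate explanatory propositions.
        target += [p for p in source if p not in target]
    return target

# Toy long-term memory loosely mirroring the mapping table above.
ltm = [
    ["prefer(ministers, Coke)", "person(ministers)", "product(Coke)"],
    ["believe(entity1, p)", "believe(entity2, p)", "agree(entity1, entity2)"],
]
print(explain("agree(ministers, corporation)", ltm))
```

Run on this toy long-term memory, the loop recruits the prefer (ministers, Coke) source first and the belief-agreement schema second, loosely mirroring iterations 0 and 1 in the table.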
Summary of the number of times each explanation was generated. [Table body not recoverable from the extraction; the surviving per-explanation counts are 5, 3, 5, 1, 2, 15, 2, and 1.]
Right-facing arrows indicate causal relations. Propositions nested within parentheses act as joint causes.