| Literature DB >> 18958272 |
Philip H. Goodman, Sermsak Buntha, Quan Zou, Sergiu-Mihai Dascalu.
Abstract
Traditional research in artificial intelligence and machine learning has viewed the brain as a specially adapted information-processing system. More recently the field of social robotics has been advanced to capture the important dynamics of human cognition and interaction. An overarching societal goal of this research is to incorporate the resultant knowledge about intelligence into technology for prosthetic, assistive, security, and decision support applications. However, despite many decades of investment in learning and classification systems, this paradigm has yet to yield truly "intelligent" systems. For this reason, many investigators are now attempting to incorporate more realistic neuromorphic properties into machine learning systems, encouraged by over two decades of neuroscience research that has provided parameters that characterize the brain's interdependent genomic, proteomic, metabolomic, anatomic, and electrophysiological networks. Given the complexity of neural systems, developing tenable models to capture the essence of natural intelligence for real-time application requires that we discriminate features underlying information processing and intrinsic motivation from those reflecting biological constraints (such as maintaining structural integrity and transporting metabolic products). We propose herein a conceptual framework and an iterative method of virtual neurorobotics (VNR) intended to rapidly forward-engineer and test progressively more complex putative neuromorphic brain prototypes for their ability to support intrinsically intelligent, intentional interaction with humans. The VNR system is based on the viewpoint that a truly intelligent system must be driven by emotion rather than programmed tasking, incorporating intrinsic motivation and intentionality. We report pilot results of a closed-loop, real-time interactive VNR system with a spiking neural brain, and provide a video demonstration as online supplemental material.
Keywords: artificial intelligence; epigenetic robotics; human robot interface; mesocircuit; neocortex; neurorobotic architecture; reinforcement learning; social robotics; virtual reality
Year: 2007 PMID: 18958272 PMCID: PMC2533586 DOI: 10.3389/neuro.12.001.2007
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Components of a closed-loop neuromorphic brain development.

General SCRIPT requirements:
- Computation and communication provide nearly real-time robotic response
- Repertoire of ROBOT behavior is commensurate with its physical and brain complexity
- ACTOR must assume the ROBOT has the ability to perceive and respond meaningfully
- ACTOR must respond as he/she would in similar real-world circumstances

SCENE:
- Realistic contents, including sights, sounds, and obstacles
- Movement of background objects apropos to the scenario
- May be altered or affected by the actions of the ROBOT or ACTOR
- May include other ROBOTs or multiple ACTORs

ACTOR:
- Human, child or adult, depending on the type of intelligence targeted
- Suitability or willingness to attribute intentionality to the ROBOT

ROBOT:
- Central nervous subsystem (BRAIN):
  - Neocortex
  - Hippocampus
  - Basal ganglia
  - Other limbic regions relating to attention, reward, and fear
  - Biologically plausible learning algorithms
  - Progressively more complex architecture as constrained by VNR
- Interpretive (rule-based) and communication subsystems (BRAINSTEM)
- Embodiment in the virtual VNR architecture:
  - Internal (virtual) sensory capabilities (proprioception, balance)
  - External sensory capabilities (vision, hearing, touch)
  - Facial and body social expressive and gestural capabilities
  - Dexterous movements (upper extremities for humanoid robots)
  - Translational movements (lower extremities)

The system comprises functional SCRIPTing requirements with three main components: SCENE, ACTOR, and ROBOT. Terms in parentheses are used in the text and Figures.
Figure 1. Schematic cartoon of a fully implemented virtual neurorobotic (VNR) system. VNR substitutes a pseudo-3D screen projection for the physical robot, which participates in real-time interplay with the human actor. The robot's eyes (pan-tilt-zoom camera) and ears (monaural or spaced stereo microphones) capture the actor's movements and voice in the context of the background scene, which is projected independently (and may contain moving elements, including other animals or actors). The BRAINSTEM is a multiprocessor computer (running threads) that synchronously (1) captures and preprocesses video images, sound, and touch, (2) converts preprocessed sensory images into probabilities of spiking for each primary neocortical region, (3) uploads the spike probability vectors to the BRAIN simulator, (4) accepts, from the BRAIN simulator motor neuron region, output spike density vectors and triggers corresponding dominant motor sequences (e.g., sitting, lying, barking, walking) via the robotic simulator program (Webots/URBI), which makes the corresponding changes in behavior of the projected robot (and incorporates internal sensation such as proprioception and balance). The BRAIN simulator is a neuromorphic modeling program running on a supercomputer, executing a pre-specified spiking brain architecture, which can adapt as a result of learning (using reward stimuli offered by the ACTOR's voice or stroking of the touch pad). Based on successful performance, researchers iteratively "plug in" alternative or more complex brain architectures.
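The four-step BRAINSTEM loop in the caption above can be sketched in miniature. This is an illustrative toy, not the actual NCS/Webots implementation: the function names, region layout, and behavior labels are all invented for the sketch, and the BRAIN simulator is replaced by a simple Bernoulli spike draw.

```python
# Toy sketch of one pass through the VNR loop of Figure 1.
# All names here are illustrative assumptions, not the documented system API.
import numpy as np

rng = np.random.default_rng(0)

def preprocess_frame(frame):
    """Steps 1-2: reduce a raw sensory frame to per-neuron spike probabilities."""
    # Normalize pixel intensities into [0, 1] spiking probabilities for a
    # primary sensory region (one value per input neuron).
    return frame.astype(float) / 255.0

def upload_to_brain(spike_probs):
    """Step 3: sample spikes; stand-in for the spiking BRAIN simulator.

    The real system uploads the probability vector to a supercomputer-hosted
    spiking network; here we draw Bernoulli spikes and return a toy
    two-element spike-density vector for two pre-motor pools.
    """
    input_spikes = rng.random(spike_probs.shape) < spike_probs
    half = input_spikes.size // 2
    return np.array([input_spikes[:half].mean(), input_spikes[half:].mean()])

def trigger_behavior(motor_density):
    """Step 4: map the dominant motor pool to a canned behavior sequence."""
    behaviors = ["sit_and_bark", "stand_and_wag"]
    return behaviors[int(np.argmax(motor_density))]

frame = rng.integers(0, 256, size=64)  # stand-in camera frame
behavior = trigger_behavior(upload_to_brain(preprocess_frame(frame)))
print(behavior)
```

In the full system this loop runs continuously, with the chosen behavior sequence handed to the robotic simulator (Webots/URBI) for rendering.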
Figure 2. Multi-threaded pipeline organization of the BRAINSTEM. Each sensory or report modality is assigned its own thread in a self-blocking queue. Data are read from a hard drive shared with sensory capture software. Outputs back to the robotic system and to the NCS brain simulator are sent by TCP/IP port routing. Documentation is available at http://brain.unr.edu.
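The thread-per-modality, self-blocking-queue pattern described in the Figure 2 caption can be sketched as follows. This is a minimal assumption-laden illustration: the modality names match the paper, but the worker function, queue wiring, and "preprocessed(...)" placeholder are invented, and the TCP/IP sender to NCS is represented only by an output queue.

```python
# Illustrative sketch of the Figure 2 threading pattern: one worker thread
# per sensory modality, each blocking on its own queue.
import queue
import threading

def modality_worker(name, in_q, out_q):
    """Block on the modality's queue, preprocess each item, forward it."""
    while True:
        item = in_q.get()        # self-blocking: thread sleeps until data arrives
        if item is None:         # sentinel value shuts the thread down
            break
        out_q.put((name, f"preprocessed({item})"))

out_q = queue.Queue()            # would feed the TCP/IP sender to NCS
modalities = {m: queue.Queue() for m in ("video", "sound", "touch")}
threads = [threading.Thread(target=modality_worker, args=(m, q, out_q))
           for m, q in modalities.items()]
for t in threads:
    t.start()

# Simulate two sensory captures arriving on different modalities.
modalities["video"].put("frame_0")
modalities["sound"].put("chunk_0")

for q in modalities.values():    # send shutdown sentinels
    q.put(None)
for t in threads:
    t.join()

results = []
while not out_q.empty():
    results.append(out_q.get())
    print(results[-1])
```

The self-blocking queue means an idle modality thread consumes no CPU, which matters when the pipeline must keep the whole loop near real time.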
Figure 3. ACTOR-BRAIN-ROBOT interplay. Ten-second behavior scenario indicating timing of ACTOR (upper row) and ROBOT (lower row) events. The ACTOR in this scenario was free to choose any sequence of movements in response to perceived intent of the ROBOT. ROBOT behavioral sequences are triggered when the neuromorphic BRAIN output to BRAINSTEM has 50 ms of consistent spiking in one pre-motor region compared with another. Periods without domination of one pre-motor region over another trigger the ROBOT to lie down and growl. In cell rasters, each row represents the timing of action potentials (spikes) of a single neuron; darker gray markers indicate clustered bursts of spikes.
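The triggering rule in the Figure 3 caption amounts to a winner-take-all test over a 50 ms window: a behavior fires only when one pre-motor region out-spikes the other for 50 consecutive milliseconds, and sustained ties fall back to lying down and growling. A toy version, with invented spike counts and behavior labels standing in for the real pre-motor regions:

```python
# Toy version of the Figure 3 motor-decision rule.  Spike counts and
# behavior names are illustrative, not taken from the actual BRAIN model.
def decide_behavior(spikes_a, spikes_b, window_ms=50):
    """spikes_a / spikes_b: per-millisecond spike counts for two pre-motor regions."""
    run_a = run_b = 0
    for a, b in zip(spikes_a, spikes_b):
        run_a = run_a + 1 if a > b else 0   # consecutive ms where A dominates
        run_b = run_b + 1 if b > a else 0
        if run_a >= window_ms:
            return "behavior_A"             # e.g., sit up and bark
        if run_b >= window_ms:
            return "behavior_B"             # e.g., stand and wag tail
    return "lie_down_and_growl"             # no sustained domination

# Region A dominates for 60 ms straight -> its behavior is triggered.
print(decide_behavior([3] * 60, [1] * 60))  # behavior_A

# Domination flips every 10 ms -> neither run reaches 50 ms -> fallback.
alternating_a = ([3] * 10 + [1] * 10) * 5
alternating_b = ([1] * 10 + [3] * 10) * 5
print(decide_behavior(alternating_a, alternating_b))  # lie_down_and_growl
```

Requiring a consecutive run, rather than a simple average over the window, is what makes the rule robust to brief spike bursts in the non-dominant region.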
Figure 4. Demonstration of VNR interaction. The system comprises three main components (SCENE, ACTOR, and ROBOT) as viewed by an external observer. () Three major behaviors of the ROBOT. () Positioning of the ROBOT's external sensory devices. () Background scene consisting of a suburban neighborhood. () VNR loop (ROBOT-eye view superimposed in lower left corner): () ACTOR approaches with threatening behavior (bat) and ROBOT responds by sitting up and barking. () ACTOR responds by lowering bat and squatting down, then ROBOT responds by lying and growling a warning. () ACTOR offers dog bone and ROBOT responds by standing and wagging tail. Online video, http://brain.unr.edu/VNR/VNRdemo.avi.