Jung Hwan Lee1, Taewoo Kang2, Byung Kwan Choi1, In Ho Han1, Byung Chul Kim1, Jung Hoon Ro3.
Abstract
OBJECTIVE: Quadriplegic patients generally use their voices to call a caregiver. However, severely quadriplegic patients with a tracheostomy cannot generate a voice, and therefore require another communication tool to call caregivers. Recently, monitoring of eye status using artificial intelligence (AI) has been widely applied in various fields. We built an eye status monitoring system using deep learning and developed a communication system that allows quadriplegic patients to call the caregiver.
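The abstract does not detail the network architecture. As a minimal illustration of the core idea, scoring a flattened eye image as open or closed, the hedged sketch below uses a single logistic unit in place of the paper's deep learning model; the weights, input, and 0.5 threshold are all placeholder assumptions:

```python
import numpy as np

def classify_eye(pixels, weights, bias):
    """Return an open-eye probability for a flattened 28x28 eye image.
    A single logistic unit stands in here for the paper's deep network."""
    z = float(pixels @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.01, size=784)  # placeholder; real weights are learned
pixels = rng.random(784)                   # placeholder normalized eye image
score = classify_eye(pixels, weights, 0.0) # > 0.5 would be read as 'open'
```

In the real system this score would be produced per eye, per frame, by the trained model on the AI server.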
Keywords: Artificial intelligence; Caregiver; Eye; Quadriplegia; Unsupervised machine learning
Year: 2019 PMID: 31720261 PMCID: PMC6826084 DOI: 10.13004/kjnt.2019.15.e17
Source DB: PubMed Journal: Korean J Neurotrauma ISSN: 2234-8999
FIGURE 1. Both eyes are captured by the webcam.
FIGURE 2. Captured eye image. The size is 28×28 pixels, for a total of 784 pixels.
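Figure 2 fixes the input format: a 28×28 grayscale crop, i.e. a 784-pixel vector. A minimal sketch of that preprocessing step follows; the synthetic image and the [0, 1] scaling are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical captured eye crop: 28x28 grayscale, values 0-255.
eye = np.zeros((28, 28), dtype=np.uint8)
eye[10:18, 4:24] = 200  # bright band roughly where an open eye would be

# Flatten to the 784-pixel vector and scale to [0, 1] before classification.
vec = eye.astype(np.float32).reshape(-1) / 255.0
```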
FIGURE 3. Software coded using Google TensorFlow. The upper image shows the algorithm for sending eye images to, and receiving results from, the artificial intelligence server. The lower image shows the algorithm that generates a sound when the left eye is closed continuously for 3 seconds while the right eye remains open.
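The trigger described in Figure 3 (a sound once the left eye has been closed for 3 continuous seconds while the right eye is open) can be sketched as a small state machine. The class name, timestamps, and boolean inputs below are illustrative assumptions, not the paper's code:

```python
class CallTrigger:
    """Fires once the left eye has been closed continuously for HOLD_SECONDS
    while the right eye stays open, then re-arms (illustrative sketch)."""
    HOLD_SECONDS = 3.0

    def __init__(self):
        self.closed_since = None  # timestamp when the left-eye closure began

    def update(self, t, left_closed, right_open):
        if left_closed and right_open:
            if self.closed_since is None:
                self.closed_since = t
            if t - self.closed_since >= self.HOLD_SECONDS:
                self.closed_since = None  # re-arm for the next call
                return True               # caller would play the sound here
        else:
            self.closed_since = None      # any interruption resets the timer
        return False

# Feed one observation every 0.5 s with the left eye closed and right eye open:
# the trigger fires at the first sample where 3 full seconds have elapsed.
trig = CallTrigger()
events = [trig.update(i * 0.5, True, True) for i in range(10)]
```

Keying on a sustained closure, rather than a single blink, is what keeps ordinary blinking from calling the caregiver by accident.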
FIGURE 4. Practical application in a quadriplegic patient. (A) Actual setting for the quadriplegic patient. (B) Window of the software. Clicking the 'btn 1' button makes the system recognize both eyes. Clicking the 'log start' button starts monitoring eye status so the patient can call the caregiver: when the left eye is closed for 3 seconds, a sound is generated to call the caregiver. The 'log end' button stops distinguishing eye status.