INTRODUCTION: Smart homes for assistance help compensate for cognitive deficits, thus favoring aging in place. However, to be effective, the assistance must be adapted to the abilities, deficits, and habits of the person. Besides the elderly person, caregivers are the ones who know the person's needs best. This article presents a Do-it-Yourself approach for helping caregivers design a smart home for assistance. METHODS: A co-construction process between a caregiver and a virtual adviser was designed. The virtual adviser's knowledge about smart homes, activities of daily living and assistance is organized in an ontology. The caregiver interacts with the virtual adviser in augmented reality to describe the home and the resident's habits inside it. The process is illustrated with an ordinary activity: 'Drink water'. RESULTS: The proposed process comprises two main steps: describing the environment, then determining the resident's habits and the assistance required to improve activity performance. Visual guidance and feedback are provided to ease the process. CONCLUSION: Designing a co-construction process with a virtual adviser allows interactive knowledge sharing with caregivers, who are experts on the person's needs. Future work should focus on evaluating the presented prototype and providing deeper advice, such as highlighting incomplete or incorrect scenarios or providing navigation aid.
With the world population aging, the number of people with cognitive impairments tends to grow.[1],[2] Such cognitive deficits, especially memory, planning and attention deficits, can lead to a loss of autonomy.[3]

The activity theory of aging[4] proposes that older adults age optimally when they stay active and maintain social interactions. However, it is necessary to compensate for cognitive deficits to favor aging in place. With Ambient Assisted Living (AAL) approaches, it is possible to define smart environments that consider the requirements of people with disabilities and thus counter this loss of autonomy,[5-8] especially for improving performance in Activities of Daily Living (ADLs).[9],[10]

An AAL system detects behaviors and provides visual or oral cues in real time to help compensate for the disabilities. For example, the activation of a light path indicates where to go to people suffering from night wandering.[9],[10] Thanks to a pressure sensor under the mattress, the system detects the person leaving the bed, identifies the nighttime wandering scenario involved, and provides the adequate assistance, in this case powering the light path to prevent spatial disorientation, anxiety and falls.

As people are more likely to follow routines with age,[11] assisting habits through pre-programmed scenarios proves to be an effective solution for elders.[10] Therefore, AAL systems must hold an internal representation of the home and the actions performed there to ensure people behave adequately.

The choice of this internal representation is constrained by the desire to allow any
caregiver to design the AAL system for assistance. As caregivers and, even more,
residents are experts of their own daily living, a Do-it-Yourself (DiY) approach seems the most appropriate, as suggested by De Roeck et al.[12]: “In order for the IoT to really take off, end-users need to participate in the creation process on a larger scale. They need to have the power and control over the creation and use of applications for smart environments.”

Since the 1990s, DiY communities have been invited by human-computer interface communities to explore user-friendly interactions.[13],[14] Indeed, to design a smart home suiting the resident’s habits, the caregivers need to interact easily with an expert system that abstracts the underlying technical complexity.

Regarding smart homes, several approaches tackle assistance. Artificial
Intelligence (AI)-driven approaches such as neural networks extract habits from
patterns in the daily activities of the resident.[15],[16] However, such approaches do not describe the decisions explicitly, making it hard to customize the assistance. We adopt a symbolic approach,[17] mapping real-world entities to digital structures in a language that stays readable by humans.

Such symbolic scenario-based approaches have proven beneficial[18],[19] by providing a unified interface for the resident and the various stakeholders involved in the design process (occupational therapists, caregivers, etc.).[18]

Regarding user-friendly interactions, Augmented Reality (AR) is nowadays used in various domains, such as entertainment,[20] medical training[21] or assistance.[22] It shows great usability properties as it superimposes additional information onto the real world. For instance, during the broadcast of a football match, AR is
used to superimpose, in real time and on the field, the attacking team’s distance to gain, or the trajectory of individual players during a replay. Furthermore, through headsets such as the Microsoft Hololens or Magic Leap One, or through smartphones, mobile AR is becoming more and more accessible, offering users freedom of movement.

We propose a Virtual Adviser (VA) that allows caregivers to digitalize by themselves (i.e. convert real-world information into computer-readable information) the habits of the resident inside her environment, and to specify the scenarios of assistance that will augment this environment. AR supports the merging
of both the real environment of the resident and the virtual world of the smart home
data. The digital scenarios created with the VA could then feed an AAL system,
offering assistance tailored to the resident’s needs and habits.

This paper presents the design of a smart home for assistance accompanied by a VA. The VA helps caregivers virtually map the resident’s habits onto her environment, specify the ideal scenario, and determine how sensors will detect the concerned ADL when providing assistance inside the smart environment.

The next section presents several Related Works to help situate our work within the
existing literature. We then discuss how Assistance Inside a Smart Home for
Assistance works before describing our Do-it-Yourself Approach and the resulting
Design Process for such assistance. The specific Implementation of our VA is later
discussed before presenting how it applies to a Use-Case Example. Finally, we
discuss the Limitations of the current work before presenting our future visions in
the Conclusion.
Related works
Few works tackle the use of AR to support people with cognitive difficulties, especially in a smart home environment or while taking the caregiver into account. In Hayhurst’s 2018 review of virtual reality and AR works to support people living with dementia,[23] only five such works are presented, whereas D’Cunha’s 2019 non-systematic minireview[24] cites only one article when considering participants with impairment or dementia.

Among them, Scavo et al.’s[25] GhostHands allows mentors to remotely control virtual hands to provide
instructions to distant workers. The GhostHands authors report that “users generally
perceived the experience as greatly stimulating and with a strong sense of
connectedness and playfulness, hence improving engagement”. Similarly, caregivers
could accompany the residents during their daily tasks by showing the right
movements or by pointing at items to improve their engagement. However, such a solution
would imply that a caregiver is reachable at any time and that the resident wears an
AR display, making it invasive on both ends and unrealistic considering our goal. We
suggest that building a representation of the tasks and the instructions, in the
form of the assistance, can help the resident without needing the intervention of a
caregiver in real-time.

On the other hand, MemHolo[26] proposes to alleviate the deficits through cognitive training exercises such
as finding pairs of identical objects, with a Microsoft Hololens. The exploratory
studies pursued in MemHolo show positive acceptance of AR technology for persons
with mild Alzheimer’s Disease. Despite not being focused on assistance in smart
homes, the article provides useful design hints and uses the same AR devices as our
VA.

Closer to our work, Memory Palace[27] encourages caregivers to attach media cues, called ‘memories’, to items of
significance in the home environment. When the resident passes nearby with a phone
and app, those memories are played back. Morel et al.’s conclusion focuses mainly on
the resident’s application and less on the caregiver side. Nonetheless, the authors
highlight that, despite some difficulty interacting with the application, the elderly people liked the personalized aspect and were open to learning new things. We
follow the same idea of augmenting the home environment and personalizing the
experience. Yet, where Memory Palace relies solely on interviews to build
unstructured cues, we pursue a semantic approach to provide context to the cues and automation through the smart home, removing the need to carry a phone.

Finally, cARe[28] is a framework designed to be easily adapted by caregivers to various
use-cases without programming knowledge. A desktop application allows caregivers to
create instructions with associated media files, later placed within the resident’s
environment using a Microsoft Hololens headset and a Unity3D application. A Unity3D
patient application for the Hololens headset provides guidance to the next
instruction and displays its details while playing the media attached. Once more,
Wolf et al.’s findings are focused on the tests performed with patients and less on
the caregiver experience. Individualized assistance is once again highlighted as
well as the possible discomfort of wearing a headset. The caregiver applications
from cARe and Memory Palace are quite similar as they provide the same possibility
to place instructions or cues inside the environment. Thus, the same comparison
applies and we provide more contextual information through a semantic approach. As
for the discomfort, we suggest that AR headsets evolution could alleviate it.
Moreover, this finding emphasizes the need for user testing, one of the limitations of our paper, and the interest in porting our VA to smartphones.

Overall, our VA is unlike GhostHands,[25] MemHolo[26] and Memory Palace[27] as it is not designed to be used daily by the resident but sporadically by the caregiver. Conversely, our VA is quite similar to cARe[28]’s Unity3D caregiver application as a tool to digitalize the activities of the
resident. However, although the instructions are placed in the environment using AR,
cARe still relies on a desktop application to create the instructions in the first
place. We differentiate ourselves by relying only on AR interactions and by adding AI advice to better digitalize the resident’s needs.

Finally, our assistance would later be handled by the smart home AI in a distributed cognition approach, as described in Kognit,[29] and not by a mentor as proposed in GhostHands.[25]

Before diving into how we approach the Design Process of the assistance, we describe how the Assistance Inside a Smart Home for Assistance works in the next section.
Assistance inside a smart home for assistance
In order to provide assistance in the daily living, a Smart Home for Assistance (SHA)
relies on a network of sensors, to detect the activity of the resident; and
actuators, to provide real-time feedback through visual or oral cues or by modifying
the environment. For instance, a warning cue can be played when the stove is left on, or a closet can open automatically when the resident needs to take a glass inside it.

As sensors provide low-level signals such as binary or numeric values, data have to
be interpreted, sometimes combined, to determine higher-level actions, for instance
through pattern recognition[15],[16] or semantic rules.[8] The change in value of a contact sensor can be translated into the action of
opening or closing a closet while the value of a pressure sensor can be compared to
a threshold to determine if someone is lying on a bed.

Actions are then sent to a reasoner, the AI core of the SHA, that will compute the assistance. This reasoner holds a state of the environment, for example the date and time, sensor values, or previous actions.

As we follow a scenario-based approach[17] to determine the adequate assistance, our reasoner also holds a set of scenarios of assistance composed of a known course of action and the adequate assistance actions for each step.

The goal of this work is to provide a tool allowing a caregiver to build such scenarios.

By comparing the state of the environment with the scenarios of assistance, the SHA can determine which scenario(s) the resident is actually following and their advancement. For instance, if the last action of the resident was to ‘lie
down on the bed’ of her bedroom, scenarios involving ‘using the oven’ in the kitchen
can be discarded. Conversely, scenarios such as ‘afternoon nap’ or ‘night
sleep’ might be valid. Finally, considering that this example takes place at 4PM,
the reasoner excludes the ‘night sleep’ scenario in favor of the ‘afternoon nap’
scenario.

Once the current scenario and its advancement are known, the assistance associated with the current action can be provided to the resident. To do so, high-level assistance actions are translated back into low-level signals and sent to the actuators. For instance, as the assistance action ‘turning off the light’ is associated with the
action ‘lie down on the bed’ from the ‘afternoon nap’ scenario, the reasoner sends
an off signal to the lights of the bedroom.
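To make this matching concrete, here is a minimal C# sketch of how such a reasoner could filter candidate scenarios against the current state. All type and member names (EnvironmentState, Scenario, Candidates) are illustrative assumptions, not the actual DOMUS implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical state held by the reasoner: last detected action and current time.
record EnvironmentState(string LastAction, TimeSpan TimeOfDay);

// A scenario of assistance: an ordered course of actions plus a validity window.
record Scenario(string Name, List<string> Actions, TimeSpan Start, TimeSpan End);

static class Reasoner
{
    // Keep only scenarios that contain the last observed action
    // and whose time window covers the current time of day.
    public static IEnumerable<Scenario> Candidates(
        IEnumerable<Scenario> scenarios, EnvironmentState state) =>
        scenarios.Where(s => s.Actions.Contains(state.LastAction)
                          && state.TimeOfDay >= s.Start
                          && state.TimeOfDay <= s.End);
}

class Demo
{
    static void Main()
    {
        var scenarios = new List<Scenario>
        {
            new("Afternoon nap", new() { "lie down on the bed" },
                new TimeSpan(13, 0, 0), new TimeSpan(18, 0, 0)),
            new("Night sleep", new() { "lie down on the bed" },
                new TimeSpan(21, 0, 0), new TimeSpan(23, 59, 59)),
            new("Use the oven", new() { "open the oven" },
                new TimeSpan(0, 0, 0), new TimeSpan(23, 59, 59)),
        };

        // At 4PM, after 'lie down on the bed', only 'Afternoon nap' remains.
        var state = new EnvironmentState("lie down on the bed", new TimeSpan(16, 0, 0));
        foreach (var s in Reasoner.Candidates(scenarios, state))
            Console.WriteLine(s.Name);
    }
}
```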
Do-it-yourself approach
As explained previously, SHAs provide a way to help a resident who is stuck while completing an ADL. Indeed, as diseases and aging impact the abilities needed to stay autonomous at home, it might become difficult to complete ADLs: forgetting items, performing some steps inappropriately, or organizing the overall activity badly.

SHAs answer this problem by displaying cues that help the resident follow the usual course of actions of her ADL. But the design of such a smart home raises challenges,
as every house is different, every resident is specific and behaves in her own way.
Adapting the assistance to the home, the resident and her habits is then mandatory,
hence justifying a DiY approach.
Actors
According to a DiY approach, family members or caregivers are privileged actors
to design the assistance in a smart home, as they know well the habits of the
resident. However, it is necessary to help them design the assistance the resident needs. Therefore, a new actor appears: the VA, which helps the caregiver design the smart home for the resident (Figure 1).
Figure 1.
A non-autonomous resident in her static home (a) becomes autonomous
inside her smart home (c) designed by a caregiver helped by a virtual
adviser (b).
We define the roles of the various actors involved in the assistance as resident, designer, VA, static home and smart home:

The resident is a person who experiences difficulties completing some ADLs at home. Issues are caused by cognitive or perceptual deficits, due either to neurological diseases or to normal aging. Instead of performing the right scenario, the resident fails in selecting the appropriate object, in using it properly, in orienting herself in her home, or in carrying out the activity adequately at the right time and in the right place.

The designer aims to define the assistance that will foster the autonomy of the resident. She could be any caregiver: a member of the family, of the neighborhood or even of the medical staff. She is characterized by her knowledge of the resident’s needs but may have little knowledge of smart home technology. To determine the appropriate assistance for a given ADL, the designer describes how this ADL is carried out by the resident in her home.

The Virtual Adviser (VA) is a technological help that assists the designer. It is characterized by its knowledge of smart home technology but has no prior knowledge of the resident nor her living environment. The VA can understand the spatial geometry and can be interacted with. It also has expertise on the successful execution of activities.

The static home is the usual home of the resident, considering no smart features are already installed.

The Smart Home for Assistance (SHA) is a smart home able to track the resident’s activity with sensors and provide environmental cues with actuators.
Scenarization of the daily living
As the backbone of the assistance, a Scenario of Activity of Daily Living (SADL)
is defined as a sequence of steps that must be performed in a specific order, at
the right place, during an adequate time and involving adequate objects.

We then distinguish the resident performance, the ideal scenario and the operationalized scenario:

The resident performance describes how the resident is performing the SADL.

The ideal scenario is the SADL that best fits the ADL that the resident is performing. The ideal scenario may include alternatives.

The operationalized scenario is the application of the ideal scenario at the resident’s home according to her habits. It includes the sensors to detect the right and wrong actions of the resident, and the actuators to provide assistance.

Following those definitions, the SHA detects the resident performance in order to help her achieve the ideal scenario. Based on the theory of instrumented activity, the operationalized scenario empowers the resident by offering appropriate cues when she fails during the activity realization.[30]

The purpose of introducing a DiY approach to design the SHA is to offer any
designer, whatever her technical knowledge, ways to describe the resident’s
needs and the operationalized scenario to foster the resident’s autonomy. Any
designer is supported during the design process with knowledge available in a
user-friendly form, thanks to a VA. Designing the assistance becomes a
co-construction between the designer, who is expert in the resident’s habits,
and the VA, who is expert in ADL scenarios and assistance needed to achieve
them.

The next sections cover the co-construction design between the designer and the
VA, regarding the knowledge they share. First, we present how the physical
environment of the resident is described and second how the ADL scenarios are
introduced in the resident’s virtual environment.
Design process
Semantic representation of the environment
ADLs are situated by definition: they involve specific elements at specific
places and times. During her ADLs, a resident performs actions with furniture or
appliances which are placed inside specific rooms of the home. Those objects and
rooms are crucial for recognizing the resident’s performance and assisting her, if
necessary. The physical environment is therefore composed of the rooms,
furniture and objects that will be involved in the SADL. For instance, in case
of hygiene assistance, the environmental description may include the bathroom, taps and toilet, as well as the toothbrush.

Thus, the first step in defining the operationalized scenario is to describe the environment where it will take place.

The physical environment description emerges from both the designer and the VA.
Indeed, the VA can understand the geometry of the surroundings, but only the
designer knows the semantics underlying the environment.

For example, when the designer looks around with an AR headset, the VA can
determine the floor and the walls as physical obstacles. However, the designer
is the one to explain the usage of the rooms, such as the kitchen or the
corridor, and the physical objects, such as the oven or the table.

The result of this step is a digitalized description of the resident’s home where
each room and each relevant piece of furniture and object are stored in a computerized plan. This plan comprises geometrical information linked to semantic information, in order to make the operationalized scenario explicit in terms of both spatial layout and meaning.

Shared Representation of the Physical Environment: This co-construction process leads to the ensuing model being used by the VA:

A corner is defined as a tridimensional point (x,y,z) in space;

A wall holds, at least, two corners and a depth;

A room is made of multiple walls as well as a type (kitchen, living room, etc.);

A furniture or an appliance has, among other attributes, three dimensions (width,height,depth) as well as a type (table, light, fan, etc.).

Eventually, implicit walls are added. Implicit walls allow rooms to be separated according to their usage even if there is no physical separation between them. For instance, an open kitchen is separated from a dining room or a living room based on its usage, but no concrete wall is present to make the separation. Thus, this separation exists only in the resident’s semantics of the space and could not have been detected by the VA.

In our model, implicit walls are walls with the attribute ‘implicit’ set to true and ‘depth’ set to 0. An implicit wall is identified by the two corners at its extremities. As the wall has no collision part between the ground and the ceiling, those corners are placed either on the ground or on the ceiling.

In the end, each physical element is linked to a virtual element in the knowledge
base resulting in a fully digitalized apartment (Figure 2) that allows the VA to
superimpose some additional information when needed.
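As an illustration, the shared plan model described above could be represented along the following lines; the C# class and property names are our own illustrative choices, not the actual implementation.

```csharp
using System.Collections.Generic;
using System.Numerics;

// A corner is a tridimensional point in space.
public record Corner(Vector3 Position);

// A wall holds at least two corners and a depth.
// Implicit walls separate rooms by usage only: depth 0, no collision.
public class Wall
{
    public List<Corner> Corners { get; } = new();
    public float Depth { get; set; }
    public bool Implicit { get; set; }
}

// A room is made of multiple walls and has a type (kitchen, living room, etc.).
public class Room
{
    public List<Wall> Walls { get; } = new();
    public string Type { get; set; } = "unspecified";
}

// A piece of furniture or an appliance has three dimensions and a type,
// and keeps a reference to the support it stands on and to its room.
public class Element
{
    public float Width { get; set; }
    public float Height { get; set; }
    public float Depth { get; set; }
    public string Type { get; set; } = "unspecified";
    public object Support { get; set; }  // floor, ceiling, wall or another element
    public Room Room { get; set; }
}
```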
Figure 2.
Overview of the DOMUS lab apartment fully mapped. Furniture and
appliances are displayed in blue. Virtual walls are displayed in
green.
Digitalization Process: To digitalize the resident’s home in AR, the designer follows the steps below:

1. Digitalize the walls corner by corner. Once a space is enclosed by walls, the designer specifies the type of the room delimited.
2. Identify the objects (appliances, furniture, lights, etc.) by placing a bounding container (box, sphere, etc.) around them.
3. Specify the type of the element from a contextual list depending on the actual room type.

With the VA, building a wall is as simple as selecting its extremities. A wall is
then filled from the floor to the ceiling using the environment geometry (Figure 3(a)).
Figure 3.
Mapping walls and specifying the type of a room.
Each time a wall is digitalized, the VA performs a check to find if it closes an
area. If so, the semantics of the delimited room is asked for through a list of room types, from which the designer chooses one option (Figure 3(b)).

By adding the semantics of rooms to the virtual environment, the VA detects the
missing furniture or appliances that commonly appear in this given room.
Creating a room thus triggers a search inside the VA's knowledge for the most common items found inside it. For instance, if no sink is specified inside a kitchen, the VA will suggest digitalizing one, as it is a common piece of furniture.
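A minimal sketch of how such a suggestion could be computed, assuming a hypothetical CommonItemAdviser table of common items per room type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CommonItemAdviser
{
    // Hypothetical excerpt of the VA's knowledge: items commonly found per room type.
    static readonly Dictionary<string, string[]> CommonItems = new()
    {
        ["kitchen"]  = new[] { "sink", "oven", "fridge", "cabinet" },
        ["bedroom"]  = new[] { "bed", "wardrobe", "light" },
        ["bathroom"] = new[] { "sink", "toilet", "shower" },
    };

    // Suggest the common items of a room type that the designer has not digitalized yet.
    public static IEnumerable<string> MissingItems(
        string roomType, IEnumerable<string> digitalized) =>
        CommonItems.TryGetValue(roomType, out var expected)
            ? expected.Except(digitalized, StringComparer.OrdinalIgnoreCase)
            : Enumerable.Empty<string>();
}

// Example: a kitchen mapped with only an oven and a fridge.
// MissingItems("kitchen", new[] { "oven", "fridge" }) yields "sink" and "cabinet".
```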
Furniture or appliances are identified by creating bounding containers around them, then by selecting their type (Figure 4(a)). Moreover, such elements
hold a reference of the supports they stay on (floor, ceiling, wall, furniture
or appliance), themselves linked to the room where they belong.
Figure 4.
A virtual bed and navigational assistance to get to it.
In addition to the functionalities presented previously, the designer may, at any
time, ask, using natural language, where some furniture is. The VA then searches
for the corresponding furniture and shows the path to follow. This provides
easier navigation (Figure
4(b)) for people who are not familiar with the environment as well as
a way to verify the elements mapped previously.
Semantic representation of the habits
Once the environment is digitalized, the designer must digitalize the resident’s
habits.

The knowledge of habits comes both from the designer and from the VA. Indeed, as a
caregiver, the designer is an expert of the resident’s habits. On the other
side, the VA is the expert in SHAs and IoT networks. It is, thus, the best actor to translate habits into the sensors and actuators that enable the SHA to detect and assist the resident.

Shared Representation of the Habits: For both actors to be able to
understand each other, a common model has to be specified at a level of
abstraction suitable for the designer.

Most often, people express ADLs according to a hierarchical view, where
activities are subdivided into subtasks and atomic tasks (i.e. tasks that cannot
be decomposed) called actions.[19]

A natural level of abstraction is to talk about ADLs in terms of a tree of tasks
produced by the resident to answer a specific goal.[19] This transfers the burden of handling complex low-level concepts such as
devices and signals to the VA.

In such a tree, tasks can be optional or can even be repeated. For instance, when
drinking water, one resident could drink three glasses of water while another
one could drink only one, meaning the task of serving water could repeat as much
as needed.

Formally, SADLs are defined as trees with “a goal (root), several tasks (internal nodes) and actions (leaves)”[19] such that:

The goal is reached once all its mandatory subtasks are accomplished;

Tasks and subtasks represent the steps involved in reaching the goal. Subtasks can have preconditions that should be met for the task to happen, post-conditions that are set once the task completes, and operators defining their order, repetition or importance (i.e. optional vs mandatory);

Actions are atomic tasks that can be translated into sensor and effector signals. They are either sensed properties of the environment or cues targeted at the resident, and are associated with an element of the environment, e.g. the whole house, a room, or a piece of furniture.

A generic SADL is depicted in Figure 5.
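A minimal C# sketch of such a tree, with illustrative names (SadlNode, Operator) rather than the actual implementation:

```csharp
using System.Collections.Generic;

// Ordering/importance operators on a task, per the SADL definition.
public enum Operator { Sequential, Optional, Repeatable }

// A node of a SADL tree: the root is the goal, internal nodes are tasks,
// and leaves are actions bound to an element of the environment.
public class SadlNode
{
    public string Name { get; set; } = "";
    public Operator Operator { get; set; } = Operator.Sequential;
    public List<string> Preconditions { get; } = new();
    public List<string> Postconditions { get; } = new();
    public List<SadlNode> Subtasks { get; } = new();

    // Set only on leaves: the environment element the action is associated with.
    public string Element { get; set; }

    public bool IsAction => Subtasks.Count == 0;
}

// Example: a fragment of the 'Drink water' SADL.
// var goal = new SadlNode { Name = "Drink water" };
// var getUp = new SadlNode { Name = "Get up", Element = "Bed" };
// var serve = new SadlNode { Name = "Serve water", Operator = Operator.Repeatable };
// goal.Subtasks.Add(getUp); goal.Subtasks.Add(serve);
```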
Figure 5.
Overview of a generic scenario of an activity of daily living with a goal
decomposed into several sub-tasks and actions, ordered by operators.
In order to be able to associate actions with elements of the environment, each
piece of furniture is linked to several actions that have been specified beforehand in the VA knowledge base. For example, the action ‘Lie Down’ is associated with the ‘Bed’ type, so it can be performed only on elements defined as beds. The VA knowledge also includes which sensors can detect this action or which actuator can produce it.

The ideal scenario is thus digitalized into an operationalized scenario, making explicit the order of actions and the assistance specified by the designer.

Digitalization Process: By gazing at an element of the environment,
the designer sees, juxtaposed to it, all the actions that can be performed with
it or the scenarios that involve it (Figure 6). She then selects an action or
a previously defined scenario to append to the SADL. Actions are grouped by
categories if there are too many of them to display at the same time.
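As an illustration, the lookup behind this display could resemble the following sketch; the ActionCatalog table and its entries are hypothetical.

```csharp
using System.Collections.Generic;
using System.Linq;

static class ActionCatalog
{
    // Hypothetical excerpt of the VA knowledge base:
    // actions available per element type, each in a display category.
    static readonly List<(string ElementType, string Action, string Category)> Entries = new()
    {
        ("Bed",     "Get up",            "Rest actions"),
        ("Bed",     "Lie down",          "Rest actions"),
        ("Cabinet", "Open the cabinet",  "Glass actions"),
        ("Cabinet", "Take a glass",      "Glass actions"),
        ("Cabinet", "Close the cabinet", "Glass actions"),
    };

    // Actions proposed when the designer gazes at an element, grouped by
    // category so the display stays readable when there are many of them.
    public static Dictionary<string, List<string>> ActionsFor(string elementType) =>
        Entries.Where(e => e.ElementType == elementType)
               .GroupBy(e => e.Category)
               .ToDictionary(g => g.Key, g => g.Select(e => e.Action).ToList());
}
```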
Figure 6.
The virtual adviser proposes two actions linked to a bed: ‘Get up' and
‘Lie down'.
While the scenario is built, the VA displays an overview of the steps (Figure 7). Editing the
scenario is then possible by expanding the timeline to its full view and
selecting a step to edit, erase, or append. The designer can also specify task
operators such as ‘Optional’ or ‘Repeat’. This
workflow allows easy iteration, as scenarios can be opened and edited whenever
needed.
Figure 7.
Simplified timeline of the steps of a scenario. The smaller blue dots on
the timeline illustrate the movement tasks computed by the virtual
adviser.
Besides the timeline, the VA builds a spatial map of the designer’s movements.
Instead of displaying the back and forth between rooms, the VA computes the
ideal path, trimming irrelevant paths during trials.

Having the VA compute this path avoids having to explicitly specify the moves.
The movement steps are added in the timeline between other steps automatically
(Figure 7).

To promote the co-design between the designer and the VA, all computed
information is displayed in real-time. For instance, the path recorded by the
system is displayed as footprints on the ground (Figure 8). Moreover, computing the
timeline and paths ensures the consistency of the SADL.
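The trimming itself is not detailed in this paper; as a rough illustration, a back-and-forth detour could be collapsed along these lines (hypothetical PathTrimmer helper):

```csharp
using System.Collections.Generic;

static class PathTrimmer
{
    // Collapse a recorded sequence of visited rooms into an ideal path:
    // drop immediate repetitions and back-and-forth detours (A, B, A -> A).
    public static List<string> Trim(IEnumerable<string> visitedRooms)
    {
        var path = new List<string>();
        foreach (var room in visitedRooms)
        {
            if (path.Count > 0 && path[^1] == room)
                continue;                      // still in the same room
            if (path.Count > 1 && path[^2] == room)
                path.RemoveAt(path.Count - 1); // went back: drop the detour
            else
                path.Add(room);
        }
        return path;
    }
}

// Example: bedroom -> corridor -> bedroom -> corridor -> kitchen
// is trimmed to bedroom -> corridor -> kitchen.
```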
Figure 8.
Footprints on the ground indicate the path recorded by the virtual
adviser after back and forth moves have been trimmed.
Implementation
The VA is composed of two applications: the AR application the designer interacts
with and the ontology that provides the knowledge needed to build the
assistance.
AR application
The AR application is installed on a Microsoft Hololens or Hololens 2 AR headset
(see https://www.microsoft.com/en-us/hololens), which superimposes virtual knowledge on the environment seen by the designer. The
headset uses a see-through display composed of two lenses on which is projected
the virtual image.No other headset or AR devices are currently supported but efforts are being made
to allow the AR application to run on devices such as the Magic Leap One headset
or even smartphones. We use multi-platform libraries and implement abstraction layers and chains of responsibility to that end.

Interacting in AR: The interaction in AR is mainly composed of two actions:
gazing and selecting. The designer watches
her environment through the headset, which builds a mesh on every physical object
(Figure 9).
Figure 9.
Mesh built by a Microsoft Hololens of a living room with a sofa and a low
table.
The headset follows the gaze of the designer to determine the commands that the
designer can use depending on the context. To send commands, the designer must
select choices proposed by the VA in AR. For instance, when a room is
digitalized, the AR application proposes to choose the type of the room amongst
a list. To select an option, the designer may speak aloud or pick an option by
pinching its label.
Technical details
Architecture: The AR application is based on the concepts of scenes, managers and controllers, which define what elements are visible at a given time, how they are retrieved or persisted and how the designer can interact with them.

The AR application is developed in C# 4.7 using the Unity 2019.2 engine.

The user interface uses the Microsoft Reality Toolkit (MRTK) 2.1.0 framework to provide the base interactions and a consistent design for the Hololens as well as standard Unity AR devices such as smartphones.

Asynchronous communication with the ontology is ensured using async REST calls to the ontology API thanks to the AsyncAwaitSupport assets. All data exchanged with the ontology is parsed from JSON to C# model classes and vice versa using the Newtonsoft.JSON 12.0.2 API.
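As an illustration of this communication layer, a manager could fetch and persist entities roughly as follows. The sketch uses a plain HttpClient and a hypothetical endpoint and Room model for brevity; the actual application relies on Unity and the AsyncAwaitSupport assets.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical model class mirroring an ontology entity.
public class Room
{
    public string Id { get; set; }
    public string Type { get; set; }
}

public class RoomsManager
{
    private static readonly HttpClient Client = new HttpClient();

    // Hypothetical endpoint of the ontology REST API.
    private const string Endpoint = "http://localhost:8080/api/rooms";

    // Retrieve the existing rooms from the ontology asynchronously.
    public async Task<List<Room>> GetRoomsAsync()
    {
        string json = await Client.GetStringAsync(Endpoint);
        return JsonConvert.DeserializeObject<List<Room>>(json);
    }

    // Persist a room created by a controller back into the ontology.
    public async Task SaveRoomAsync(Room room)
    {
        var content = new StringContent(JsonConvert.SerializeObject(room));
        await Client.PostAsync(Endpoint, content);
    }
}
```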
For instance, the MainMenu scene relies on the MainMenuController to switch between the available
scenes while the Plan scene relies simultaneously on
multiple controllers such as the WallsController or the
RoomsController to allow the designer to respectively
build walls and identify rooms. Finally, the Scenario scene
enables the ElementsController to identify furniture or
appliances, the ActionsController to display the available
actions for the elements in the field of view and the
ScenariosController to organize the actions in
SADLs.

Inside the Plan scene (Figure 10), the
WallsController handles the corners and walls while
RoomsController handles the rooms. Each controller
retrieves the existing entities from a manager that
communicates with the ontology through a REST client. Those
managers are also responsible for persisting the
entities created by the controllers into the ontology.
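A minimal sketch of this controller/manager separation, with illustrative interfaces rather than the real Unity components:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative contracts: managers retrieve and persist entities through the
// ontology REST client; controllers react to designer interactions.
public record Corner(float X, float Y, float Z);

public interface IWallsManager
{
    Task<List<Corner>> GetCornersAsync();    // existing entities
    Task SaveWallAsync(Corner a, Corner b);  // persist a new wall
}

public class WallsController
{
    private readonly IWallsManager manager;
    private Corner pending;

    public WallsController(IWallsManager manager) => this.manager = manager;

    // Called when the designer clicks a corner on the Hololens mesh:
    // two successive corners define a wall, persisted by the manager.
    public async Task OnCornerSelectedAsync(Corner corner)
    {
        if (pending is null)
        {
            pending = corner;
        }
        else
        {
            await manager.SaveWallAsync(pending, corner);
            pending = null;
        }
    }
}
```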
Figure 10.
Software architecture of the AR application. Scenes contain
controllers that determine which interaction and display is active.
Controllers use managers to retrieve and persist entities into the
ontology through the REST client.
For the interactions, WallsController listens to clicks on
the Hololens mesh to create new corners and to clicks on corners to build
the walls that separate them. On the other hand,
RoomsController waits until walls are created to search
for newly closed rooms that should be identified.

Ontology

The VA is connected to an OWL (see https://www.w3.org/TR/owl-features/) ontology through an API
endpoint. When the designer interacts with the AR application, the AR
application updates its internal representation of the environment and the
resident’s habits by taking concepts from this domain ontology and applying
them to the current situation.

The OWL-DL ontology is the outcome of several years of work of a
multi-disciplinary team of researchers, students and occupational therapists
at the DOMUS Laboratory[8],[9],[32] and is still being extended at the time of writing.

This knowledge base is being built using Protégé 4 and the dul (http://www.ontologydesignpatterns.org/ont/dul/DUL.owl) ontology as a baseline. It integrates multiple aspects such as homes, tasks, assistance, devices, persons and activities, as depicted in Figure 11.
Figure 11.
Overview of the DOMUS ontology.[31] Relation names have been omitted for readability reasons.
Home concepts such as corners, walls, rooms and elements are primarily used by the AR assistant to digitalize the resident’s environment.

Task concepts such as tasks, scenarios (scenarios are actually tasks), actions, conditions or operators are used by the AR assistant to digitalize the resident’s habits.

Assistance concepts such as assistance, audio, light or commands are linked to scenario actions and used by the SHA to adapt the assistance to the current situation. For instance, if the resident leaves the oven on for too long without cooking, the SHA will first ask the resident to stop the oven. To do so, the SHA has to determine the current location of the resident and then produce the prompting in her room, searching inside its Home and Assistance knowledge to produce the adequate command. If the resident is not available or if she fails to correct the situation, the SHA can then produce a command to shut the oven down (a sketch of this graded escalation follows this list).

Device concepts such as devices, sensors, values or controllers are linked to scenario actions and used by the SHA to determine the current state of the environment, extrapolate the actions performed from the sensor values, and translate the Assistance commands into adequate signals to the actuators.

Finally, Activity and Person concepts are used to tag the type of tasks performed by the resident and to specify the user profile with her own preferences.
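As a rough illustration of the graded assistance described for the Assistance concepts, consider the following sketch; the helpers for locating the resident and driving actuators are hypothetical.

```csharp
using System;
using System.Threading.Tasks;

// A minimal sketch of graded assistance, assuming hypothetical helpers for
// locating the resident and driving actuators; not the actual DOMUS reasoner.
public class OvenAssistance
{
    private readonly Func<string> locateResident;    // e.g. returns "kitchen"
    private readonly Action<string, string> prompt;  // (room, message)
    private readonly Action turnOvenOff;             // actuator command
    private readonly Func<bool> ovenStillOn;         // sensor reading

    public OvenAssistance(Func<string> locateResident,
                          Action<string, string> prompt,
                          Action turnOvenOff,
                          Func<bool> ovenStillOn)
    {
        this.locateResident = locateResident;
        this.prompt = prompt;
        this.turnOvenOff = turnOvenOff;
        this.ovenStillOn = ovenStillOn;
    }

    // First prompt the resident in her current room; if she does not
    // correct the situation within the grace period, shut the oven down.
    public async Task HandleOvenLeftOnAsync(TimeSpan gracePeriod)
    {
        prompt(locateResident(), "Please turn the oven off.");
        await Task.Delay(gracePeriod);
        if (ovenStillOn())
            turnOvenOff();
    }
}
```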
Use-case example
Dehydration is a well-known problem experienced by people with dementia and by elderly people.[32] Encouraging elderly people to drink more often is therefore important. On the other hand, people with Alzheimer’s disease may present nighttime wandering, which could be assisted by a SHA to ensure they go back to bed.[9] During the night, the resident may desire to drink water, and the SHA should assist her in going to the kitchen and satisfying her need.

Both situations, dehydration and nighttime wandering, share a common scenario: ‘Drink water’.[31] The inclusion of this scenario in a nighttime wandering context is
illustrated in Figure
12.
Figure 12.
Overview of the drink water scenario.[31] The goal (‘Drink water’) is decomposed into several subtasks (squared, e.g. ‘Take a glass from the cabinet’) and actions (rounded, e.g. ‘Open the cabinet’). The grayed action (‘Turn the cabinet light on’) is an example of an environmental cue that helps achieve the ADL.
In this scenario, the goal is to ‘Drink water’. Several tasks should
happen, starting with the resident leaving the bed, then performing some tasks in
the kitchen and ending by going back to bed. Tasks such as ‘Leave the
bed’ or ‘Get back to bed’ are actions (atomic tasks)
that can be detected by sensors, for example by a pressure sensor under the
mattress.

To implement the scenario, the designer first digitalizes the bedroom, the bathroom
and the kitchen. She determines the relevant objects, such as bed, taps, cabinet and
glasses. On its side, the VA prepares the next step by gathering the actions associated with those objects, such as ‘Get up’ and ‘Lie down’ associated with the bed, or ‘Open’ associated with the cabinet.

After the physical environment is digitalized, the designer goes through the home to
specify the ‘Drink water’ scenario. She first goes to the bedroom
to indicate that the scenario begins when the resident rises from her bed. She gazes
at the bed and selects the action ‘Get up’ proposed by the VA, then
moves to the kitchen.

At this time, the designer recognizes that complex scenarios like ‘Drink
water’ can be decomposed into smaller ones, such as the ‘Take a
glass’ and ‘Fill the glass’ scenarios. She tells the
VA that she wants to create a new (sub-)scenario: ‘Take a glass from the
cabinet’.

To digitalize this scenario, she follows the steps listed below:

1. Gaze at a cabinet
2. Select the action ‘Open the cabinet’
3. Select the assistance ‘Turn the cabinet light on’
4. Select the action ‘Take a glass’
5. Select the assistance ‘Turn the cabinet light off’
6. Select the action ‘Close the cabinet’

Once this scenario is complete, the designer tells the VA to resume the creation of the ‘Drink water’ scenario. The ‘Take a glass from the cabinet’ scenario is associated with the cabinet object and added to the scenario. It will be available later if the designer wants to embed this specific
scenario into another one.

Considering that the designer has already created the ‘Fill the glass’ scenario as well, the whole process of digitalizing the ‘Drink water’ scenario would be:

1. Go to the bedroom and gaze at the bed
2. Select the action ‘Get up’
3. Go to the kitchen and gaze at a cabinet
4. Select the category ‘Glass actions’
5. Select the scenario ‘Take glass’ (containing the actions: ‘Open the cabinet’, ‘Turn the cabinet light on’, etc.)
6. Gaze at the faucet
7. Select the category ‘Glass actions’
8. Select the scenario ‘Fill glass’ (containing the actions: ‘Put glass’, ‘Open faucet’, ‘Close faucet’)
9. Go to the bedroom and gaze at the bed
10. Select the action ‘Lie down’

This ‘Drink water’ scenario could later be included into a more global ‘Nighttime
wandering’ scenario. Embedding scenarios supports the natural hierarchical way of thinking about activities.[19] It also allows easier design by avoiding repetition for the designer. A sketch of such embedding follows.
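As a self-contained illustration of such embedding, a scenario could be represented as a named step whose sub-steps are actions or whole sub-scenarios; all names below are illustrative.

```csharp
using System.Collections.Generic;

// A self-contained sketch of scenario embedding: a scenario is a named node
// whose steps can be actions or whole sub-scenarios.
public class Step
{
    public string Name { get; set; } = "";
    public List<Step> SubSteps { get; } = new();

    public static Step Action(string name) => new() { Name = name };

    public static Step Scenario(string name, params Step[] steps)
    {
        var s = new Step { Name = name };
        s.SubSteps.AddRange(steps);
        return s;
    }
}

static class DrinkWaterExample
{
    // 'Drink water' embeds the 'Take glass' and 'Fill glass' sub-scenarios,
    // and could itself be reused inside a 'Nighttime wandering' scenario.
    public static readonly Step DrinkWater = Step.Scenario("Drink water",
        Step.Action("Get up"),
        Step.Scenario("Take glass",
            Step.Action("Open the cabinet"),
            Step.Action("Take a glass"),
            Step.Action("Close the cabinet")),
        Step.Scenario("Fill glass",
            Step.Action("Put glass"),
            Step.Action("Open faucet"),
            Step.Action("Close faucet")),
        Step.Action("Lie down"));
}
```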
Limitations
The results presented in this paper must be interpreted with caution as this study
presents some limitations.

First and foremost, evaluation has yet to be performed either with researchers not
involved with the project or with caregivers and residents. Future evaluation will
lead to an in-depth discussion about the real-world usage of the VA, highlighting its strengths and pitfalls through test-user surveys and interviews.

Several concerns may then be resolved, such as technology acceptance and
accessibility. However, as AR is becoming more popular and accessible and as the VA
is only used occasionally by the designer, ease-of-use and implementation choices
may have most of the impact on acceptance.

On the other hand, as accessibility can be linked to cost and since current AR
headsets can be expensive, we suggest that the headset could be leased, especially
considering the sporadic use of such a headset. Nonetheless, we also work on porting
our VA to more affordable and widespread hardware such as smartphones as stated in
the Implementation section.

Finally, we acknowledge that building ADL scenarios, and thus decomposing tasks, can be challenging,[33] especially since the designer is not necessarily the resident herself. We suggest that the VA could answer this difficulty by stimulating the Engagement-Reflection[34] cycle. Indeed, the literature suggests that creative tasks are performed first by
producing content (engagement) and then by reflecting over the produced material
(reflection). By allowing the designer to go back and edit previous scenarios, the
VA will help the designer reflect on the material, while real-time advice could help
during the engagement. We are already developing the AI of the VA in this direction
and this work will be the subject of future papers.
Conclusion
In this paper, we described a DiY approach to design smart homes for assistance.
Without having knowledge of IoT, any designer, notably any caregiver, may determine
the assistance the resident needs in her home to complete ADLs. This assistance is
operationalized in a scenario of ADL that describes the tasks and actions necessary
to achieve it. The designer is accompanied during the design process by a VA. The
interaction with the VA allows the designer to describe the home and how the
resident carries out ADLs.

Thanks to an ontology and AR, the home is digitalized according to both spatial and
semantic points of view. The designer may then easily describe how the resident
behaves in her home by going through the actions realized to complete a specific
activity.

The scenario ‘Drink water’ illustrates how to use the VA, but also
shows how scenarios may embed other scenarios. This hierarchical description of
activities helps in designing more complex scenarios. We show how easy it is to integrate previously defined scenarios.

Finally, we pursue the DiY approach that, in addition to offering an easy way of
designing smart homes, aims to create a community. Our objective is to build a
library of scenarios. As more people become involved in designing smart homes, they
will be part of the smart home designing community and may share the scenarios they
have built.

The next step of this research is to evaluate the current implementation of our VA.

We also want to make the VA evolve into a guide rather than a tool. To do so, we plan
to integrate an interactive agent that guides the designer, by highlighting
forgotten basic furniture, incomplete or incorrect scenarios, or by providing
navigation aid.

Mainstream AR is still in its infancy. However, the trend is to diversify the
applications and to make it more accessible. This offers a great opportunity to help caregivers design smart homes by themselves, helping older people stay autonomous and facilitating aging at home despite the scarcity of health services and medical staff shortages.