Andrea Tocchetti¹, Lorenzo Corti¹, Marco Brambilla¹, Irene Celino².
Abstract
The spread of AI and of black-box machine learning models has made it necessary to explain their behavior; consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by a human as the final beneficiary of the model. In our research, we frame the explainability problem from the crowd's point of view and engage both users and AI researchers through a gamified crowdsourcing framework. We investigate whether it is possible to improve the crowd's understanding of black-box models and the quality of the crowdsourced content by engaging users in a set of gamified activities through a gamified crowdsourcing framework named EXP-Crowd. While users engage in such activities, AI researchers organize and share AI- and explainability-related knowledge to educate users. We present the preliminary design of a game with a purpose (G.W.A.P.) to collect features describing real-world entities, which can be used for explainability purposes. Future work will concretise and improve the current design of the framework to cover specific explainability-related needs.
Keywords: Explainable AI; crowdsourcing; explainability; game with a purpose; gamification
Year: 2022 PMID: 35527794 PMCID: PMC9075103 DOI: 10.3389/frai.2022.826499
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Figure 1. Interaction flows of researchers (dashed cyan arrows) and users (solid orange arrows) with the activities devised within our framework, as described in Section 3. Researchers organize users' knowledge and set up activities to collect data. As users engage with such activities, they provide Content to researchers. In turn, researchers give users feedback about the activity they performed. Such feedback aims to improve users' understanding of the activity itself and of the knowledge and context provided within it.
Figure 2. The setup step of the gamified activity. Player 1 is provided with the category of the entity they have to guess (in this case, an animal). Player 2 is supplied with a picture of the entity and its name (in this case, the picture of a zebra).
Figure 3. On the left, the Basic Turn of the gamified activity is displayed: Player 1 asks yes-or-no questions about the entity, and Player 2 answers them. On the right, the Annotation Step is summarized: Player 2 is asked to complete a series of simple tasks to identify the guessed feature by answering questions and, potentially, annotating the picture.
The table represents the average and the sample m.s.e. per participant for each feature type and for each picture.
Group 1:

| Picture | “R” Features | “NR” Features | “A” Features |
| --- | --- | --- | --- |
| Crocodile | 3.67 ± 0.51 | 1.33 ± 1.63 | 1.5 ± 1.38 |
| Parrot | 5 ± 2 | 0.17 ± 0.48 | 2.17 ± 1.72 |

Group 2:

| Picture | “R” Features | “NR” Features | “A” Features |
| --- | --- | --- | --- |
| Crocodile | 3.83 ± 1.94 | 2 ± 0.63 | 0.17 ± 0.41 |
| Parrot | 6 ± 0.89 | 0 ± 0 | 0.83 ± 0.41 |

Group 3:

| Picture | “R” Features | “NR” Features | “A” Features |
| --- | --- | --- | --- |
| Crocodile | 0.5 ± 0.55 | 1.17 ± 0.75 | 3.33 ± 0.81 |
| Parrot | 1.5 ± 0.55 | 0 ± 0 | 2.67 ± 1.51 |
The table is organized according to the groups described in Section 4.
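Each cell in the tables reads as an average ± a dispersion figure across participants. As a sanity check of the notation only, the snippet below recomputes such a cell from made-up per-participant counts, under the assumption that the ± value is the sample standard deviation (n − 1 denominator); the counts are invented example data, not the study's raw measurements.

```python
from statistics import mean, stdev

# Hypothetical "R"-feature counts for six participants (illustrative only).
counts = [3, 4, 4, 3, 4, 4]

avg = mean(counts)    # per-participant average
sd = stdev(counts)    # sample standard deviation (divides by n - 1)

print(f"{avg:.2f} ± {sd:.2f}")  # prints "3.67 ± 0.52"
```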