Ziwei Liu, Xiaoguang Niu, Xu Lin, Ting Huang, Yunlong Wu, Hui Li.
Abstract
In a densely distributed mobile crowdsourcing system, data collected by neighboring participants often exhibit strong spatial correlations. By exploiting this property, one may employ a portion of the users as active participants and keep the remaining users idle without compromising the quality of sensing or the connectivity of the network. In this work, two participant selection questions are considered: (a) how to recruit an optimal number of users as active participants to guarantee that the overall sensing data integrity stays above a preset threshold; and (b) how to recruit an optimal number of participants, some of whose data are inaccurate, so that fairness of selection and resource conservation are achieved while sufficient sensing data integrity is maintained. For question (a), we propose a novel task-centric approach that explicitly exploits data correlation among participants; the resulting subset selection problem is cast as a constrained optimization problem, and an efficient polynomial-time algorithm is proposed to solve it. For question (b), we formulate the set partitioning problem as a constrained min-max optimization problem and propose a solution based on an improved version of the polynomial-time algorithm from (a). We validate these algorithms on a publicly available Intel-Berkeley lab sensing dataset and achieve satisfactory performance.
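The record does not reproduce the paper's polynomial-time algorithm (IPSA), so the sketch below is an illustrative greedy variant of question (a) only, under the assumption that each idle participant's readings can be predicted by a linear fit from its best-correlated active peer. All names here (`greedy_active_selection`, `worst_loss`) are hypothetical and not from the paper.

```python
import numpy as np

def greedy_active_selection(readings, loss_threshold):
    """Greedily recruit active participants until every idle participant's
    series can be approximated, via a linear fit from its most correlated
    active peer, with mean-squared loss at or below the threshold.

    readings: (n_participants, n_samples) array of sensed values.
    Returns a list of selected (active) participant indices.
    """
    n = readings.shape[0]
    corr = np.corrcoef(readings)  # pairwise correlation between participants
    active = []
    remaining = set(range(n))

    def worst_loss(active_set):
        # Predict each idle participant from its best-correlated active
        # peer via least squares; report the worst per-participant MSE.
        losses = []
        for i in range(n):
            if i in active_set:
                continue
            j = max(active_set, key=lambda a: abs(corr[i, a]))
            w, b = np.polyfit(readings[j], readings[i], 1)  # slope, intercept
            pred = w * readings[j] + b
            losses.append(np.mean((readings[i] - pred) ** 2))
        return max(losses) if losses else 0.0

    while True:
        # Add the candidate whose inclusion most reduces the worst loss.
        best = min(remaining, key=lambda c: worst_loss(active + [c]))
        active.append(best)
        remaining.discard(best)
        if worst_loss(active) <= loss_threshold or not remaining:
            return active
```

On data with two spatially correlated clusters, this sketch typically selects one representative per cluster, mirroring the paper's intuition that correlated neighbors can stay idle without hurting data integrity. The paper's actual formulation and min-max extension for question (b) differ and are not reproduced here.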
Keywords: data integrity; data prediction; mobile crowd sensing; participant selection; task-centric
Year: 2016 PMID: 27223288 PMCID: PMC4883437 DOI: 10.3390/s16050746
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Active participants in a clustered MCS.
Figure 2. Data correlation vs. distance.
Figure 3. Simulation setup and environmental observations in the Intel Berkeley Research lab. (a) Sensor deployment and grid-based task area; (b) possible task areas and participant-moving areas.
Figure 4. Number of active participants vs. loss threshold.
Figure 5. Number of tasks participated in by each participant within 8 days.
Figure 6. Testing approximation errors of IPSA. (a) 10 active participants (training loss threshold = 0.025); (b) 15 active participants (training loss threshold = 0.015); (c) 20 active participants (testing loss threshold = 0.005).
Figure 7. Comparison of approximation errors: IPSA vs. RS and LDS.
Figure 8. Distribution of active participants under IPSA, RS, and LDS. (a) IPSA; (b) RS; (c) LDS.