| Literature DB >> 33260831 |
Francesco Bellotti, Nisrine Osman, Eduardo H. Arnold, Sajjad Mozaffari, Satu Innamaa, Tyron Louw, Guilhermina Torrao, Hendrik Weber, Johannes Hiller, Alessandro De Gloria, Mehrdad Dianati, Riccardo Berta.
Abstract
While extracting meaningful information from big data is gaining relevance, the literature lacks information on how different project partners should handle sensitive data in order to collectively answer research questions (RQs), especially for the impact assessment of new automated driving technologies. This paper presents the application of an established reference piloting methodology and the consequent development of a coherent, robust workflow. Key challenges include ensuring methodological soundness and data validity while protecting partners' intellectual property. The authors draw on their experience in a 34-partner project, spanning 10 European countries, aimed at assessing the impact of advanced automated driving functions. In the first step of the workflow, we captured the quantitative requirements of each RQ in terms of the relevant data needed from the tests. Most of the data come from vehicular sensors, but subjective data from questionnaires are processed as well. Next, we set up a data management process involving several partners (vehicle manufacturers, research institutions, suppliers and developers) with different perspectives and requirements. Finally, we deployed the system so that it is fully integrated within the project's big data toolchain and usable by all the partners. Based on our experience, we highlight the importance of the reference methodology to theoretically inform and coherently manage all the steps of the project, and the need for effective and efficient tools to support the everyday work of all the involved research teams, from vehicle manufacturers to data analysts.
Keywords: collaborative project methodology; connected and automated driving; deployment and field testing; impact assessment; knowledge management; research data collection and sharing; vehicular sensors
Year: 2020 PMID: 33260831 PMCID: PMC7730337 DOI: 10.3390/s20236773
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
An example of the definition of logging requirements for a hypothesis [7].
| Item | Example |
|---|---|
| Evaluation area | Technical and traffic |
| RQ level 1 | “What is the impact of the ADF on driving behaviour?” |
| RQ level 2 | “What is the ADF impact on driven speed in different scenarios?” |
| RQ level 3 | “What is the ADF impact on driven speed in driving scenario X?” |
| Hypothesis | Example 1: “There is no difference in the driven mean speed for the ADF compared to manual driving.” |
| Required Performance indicators (PIs) | Mean speed, standard deviation of speed, max speed, plot (speed/time) |
| Logging requirements/sensors available | CAN bus of vehicle: Ego speed in x direction |
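The PIs in this example hypothesis can be computed directly from the logged ego-speed signal. Below is a minimal sketch, assuming the CAN-bus speed samples are available as a plain list in m/s; the function name and dummy values are illustrative and not part of the L3Pilot toolchain:

```python
# Sketch: computing the speed PIs required by the example hypothesis
# (mean, standard deviation, and maximum of the logged ego speed).
import statistics

def speed_pis(ego_speed_mps):
    """Return the PIs required by the example hypothesis."""
    return {
        "mean_speed": statistics.mean(ego_speed_mps),
        "stdev_speed": statistics.stdev(ego_speed_mps),
        "max_speed": max(ego_speed_mps),
    }

samples = [27.1, 27.8, 28.0, 27.5, 26.9]  # m/s, dummy values
pis = speed_pis(samples)
```

In the actual workflow such PIs would be computed per trip (or per scenario instance) and compared between ADF and manual-driving baselines.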
Figure 1. Overview of the research question (RQ) definition and implementation workflow.
An overview of the L3Pilot vehicular sensor data performance indicator (PI) types.
| PI Type | Description | Example of PIs |
|---|---|---|
| Trip PI | PIs computed at trip level | Mean (stdev) longitudinal acceleration, percentage of time elapsed per driving scenario type |
| Scenario-specific trip PI | PIs computed at trip level, but only when a specific driving scenario occurs. Examples of driving scenarios, described later, are driving in a traffic jam and lane change. | Mean duration of sections with speed lower than a threshold |
| Scenario instance PI | PIs computed for each instance of a driving scenario. The same PIs are computed for each type of scenario. | Mean (stdev) time headway, mean (stdev) position in lane |
| Datapoint for a "Following a lead vehicle" scenario | Datapoint PIs are computed for each instance of a driving scenario. Different scenario types have different datapoint structures; two examples are reported here. Datapoints are used as input for the impact assessment, either by re-simulating driving scenarios or by constructing artificial scenarios based on statistical analyses of the scenarios encountered during piloting. | Mean (stdev) relative velocity, time headway at minimum time to collision |
| Datapoint for an "Approaching a traffic jam" scenario | (See description above.) | Vehicle speed at brake or steering onset, longitudinal position of object at brake or steering onset |
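As an illustration of a scenario-specific trip PI from the table, the following sketch segments a speed trace into maximal low-speed sections and computes their mean duration. The function name, sampling interval, and dummy trace are assumptions for illustration; they are not taken from the project's actual processing scripts:

```python
# Sketch: segmenting a trip into low-speed sections to compute the
# scenario-specific trip PI "mean duration of sections with speed
# lower than a threshold".

def low_speed_section_durations(speeds_mps, threshold_mps, dt_s):
    """Durations (s) of maximal runs where speed < threshold."""
    durations, run = [], 0
    for v in speeds_mps:
        if v < threshold_mps:
            run += 1
        elif run:
            durations.append(run * dt_s)
            run = 0
    if run:  # close a run that reaches the end of the trace
        durations.append(run * dt_s)
    return durations

speeds = [12, 3, 2, 2, 14, 13, 1, 2, 15]  # m/s, dummy trace
durs = low_speed_section_durations(speeds, threshold_mps=5, dt_s=0.1)
mean_duration = sum(durs) / len(durs)  # the trip-level PI
```

The same run-length approach generalizes to other scenario-segmentation rules (e.g., traffic-jam detection over speed and headway conditions).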
Figure 2. L3Pilot data workflow.
Measurify user roles and rights. General description and mapping in the L3Pilot case.
| Role | Description | L3Pilot Configuration/Notes |
|---|---|---|
| Providers | Provider users are data owners. They can upload data and retrieve only their own data. | In L3Pilot, Providers are vehicle owners or their in-depth analysis partner for vehicular sensor data and pilot leaders for subjective data |
| Analysts | Analyst users cannot upload data but can see all the data of their typology. | In L3Pilot, analysts are the experts responding to the research questions. Utilizing Measurify's Right resource, we implemented three typologies, matching the types of relevant data: technical and traffic analysts, who access all vehicular sensor data apart from the datapoints; impact analysts, who access datapoints; and user analysts, who access subjective data |
| Admin | The admin configures the CDB (e.g., setting up users and rights) and can see all data entries (only in case of need) | Given the adopted ID pseudonymization, the admin cannot resolve IDs (i.e., relate a data entry to its vehicle owner or driver) |
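The rights model in the table can be illustrated with a minimal access-control check: Providers see only their own data, Analysts see all data of their typology, and the Admin sees everything. This is a hedged sketch of those rules only; the class, field, and role names are illustrative and do not reproduce Measurify's actual REST resources:

```python
# Sketch of the role/rights rules described in the table above.
from dataclasses import dataclass

@dataclass
class Record:
    owner: str      # pseudonymized provider ID
    typology: str   # e.g., "vehicular", "datapoint", "subjective"

def can_read(role, user, record):
    """Return True if a user with the given role may read the record."""
    if role == "admin":
        return True  # full visibility (only in case of need)
    if role == "provider":
        return record.owner == user["id"]  # own data only
    if role == "analyst":
        return record.typology in user["rights"]  # own typology only
    return False

rec = Record(owner="P07", typology="datapoint")
assert can_read("analyst", {"rights": {"datapoint"}}, rec)   # impact analyst
assert not can_read("provider", {"id": "P01"}, rec)          # not the owner
```

Note that, consistent with the pseudonymization described above, nothing in the record lets an analyst resolve `owner` back to a real vehicle owner or driver.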
Figure 3. Flowchart of the vehicular sensor data processing MATLAB scripts.
Figure 4. Example of scenario segmentation during a trip.
Figure 5. Example data output in SPSS.
Figure 6. Example of vehicular query with (dummy) results displayed in a table.