A hybrid-AI approach for competence assessment of automated driving functions
[S.l.] : CEUR-WS
In: AAAI Workshop on Artificial Intelligence Safety 2021, pp. 1-19
AAAI Workshop on Artificial Intelligence Safety 2021 (8 February, 2021)
Article in monograph or in proceedings
Subject: Cognitive artificial intelligence
As automated driving technology develops, an increasing number of tasks are being taken over from the human driver. Accidents have been reported in situations where the automated driving technology was not able to function according to specifications. As data-driven Artificial Intelligence (AI) systems become more ubiquitous in automated vehicles, it is increasingly important to make these systems situationally aware. One aspect of this is determining whether they are competent in the current and immediate traffic situation, or whether they should hand over control to the driver or a safety system. We aim to increase the safety of automated driving functions by combining data-driven AI systems with knowledge-based AI into a hybrid-AI system that can reason about competence in the current traffic state and in the next few seconds. We showcase our method using an intention prediction algorithm that is based on a deep neural network and trained with real-world data of traffic participants performing a cut-in maneuver in front of the vehicle. This is combined with a unified, quantitative representation of the situation on the road, expressed as an ontology-based knowledge graph with first-order logic inference rules, that takes as input both the observations of the sensors of the automated vehicle and the output of the intention prediction algorithm. The knowledge graph utilises two features, importance, based on domain knowledge, and doubt, based on the observations and information about the dataset, to reason about the competence of the intention prediction algorithm. We have applied the competence assessment of the intention prediction algorithm to two cut-in scenarios: a traffic situation that is well within the operational design domain described by the training data set, and a traffic situation that includes an unknown entity in the form of a motorcycle that was not part of the training set.
In the latter case the knowledge graph correctly reasoned that the intention prediction algorithm was incapable of producing a reliable prediction. This shows that hybrid AI for situational awareness holds great promise for reducing the risk of automated driving functions in an open world containing unknowns.

Automated driving is one of the most appealing applications of artificial intelligence in an open world. It holds the promise of reducing the number of casualties (1.35 million yearly (WHO 2018)), increasing the comfort of travel by taking over the driving task from humans, and bringing mobility to those unable to drive. While fleets of fully automated vehicles that can run unrestrained in an open world are still far away (Koopman and Wagner 2016), many vehicles are already equipped with Advanced Driver Assistance Systems (Okuda, Kajiwara, and Terashima 2014), such as Lane Keep Assist and Adaptive Cruise Control. According to the Geneva Convention on road traffic of 1949 and the Vienna Convention on road traffic of 1968, on which many countries base their national traffic laws, a human driver has to be present in the vehicle (Vellinga 2019). Artificial Intelligence (AI) opens up the possibility of automation in increasingly complex situations, but also makes it increasingly difficult for human drivers to understand the limitations of the system (Thill, Hemeren, and Nilsson 2014). The tremendous success of Deep Neural Networks (DNNs) in recent years (LeCun, Bengio, and Hinton 2015) has led to many applications in automated driving, ranging from perception (Cordts et al. 2016) and trajectory prediction (Deo and Trivedi 2018) to decision making (Bansal, Krizhevsky, and Ogale 2019). The strength of DNNs is their capability to deal with complex problems, but one important drawback for their application in safety-critical systems is how they deal with new situations (Hendrycks and Gimpel 2017; McAllister et al. 2017).
DNNs learn a (possibly very complex) mapping from input data to output, but they lack an understanding of the deeper causes of this output. Hence, these algorithms cannot reason about whether they are competent to produce reliable output based on the input data. To safely apply DNNs (or any learning algorithm) in automated vehicles, we need to add situational awareness: the comprehension of whether the system understands the current environment and is capable of producing reliable output. In this work we describe a hybrid-AI approach (van Harmelen and ten Teije 2019; Meyer-Vitali et al. 2019) to situational awareness, in which a data-driven AI is coupled to a knowledge graph with reasoning capabilities. The current application is a DNN that predicts the intention of other road users to merge into the lane of the ego vehicle (a cut-in maneuver). This is combined with a knowledge graph of the traffic state that relates the current situation to what the predictor has learned from the training data. The knowledge graph reasoner returns an estimate of the reliability of the predictor, which it forecasts into the immediate future (2 seconds ahead) so that the driver or safety system can be warned in advance that a takeover of control is imminent.
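The competence assessment described above can be caricatured in a short sketch. All names and thresholds here are illustrative assumptions, and the coverage-based "doubt" measure is a stand-in: the actual reasoner described in the paper is an ontology-based knowledge graph with first-order inference rules, not the simple fraction used below.

```python
# Illustrative sketch (not the paper's implementation): score the predictor's
# reliability for a current or forecast traffic state, and advise a handover
# of control when reliability is expected to drop. "Doubt" is caricatured as
# non-coverage of an observed entity type by the training data.

TRAINING_CLASSES = {"car", "truck"}  # assumed classes seen during training

def reliability(traffic_state):
    """Fraction of observed entity types covered by the training data."""
    if not traffic_state:
        return 1.0
    covered = sum(1 for entity in traffic_state if entity in TRAINING_CLASSES)
    return covered / len(traffic_state)

def advise_handover(forecast_state, threshold=0.9):
    """Warn in advance: True if the predictor is expected to be incompetent."""
    return reliability(forecast_state) < threshold

now = ["car", "car"]                    # well within the training domain
in_two_seconds = ["car", "motorcycle"]  # unknown entity enters the scene

print(advise_handover(now))             # → False: predictor deemed competent
print(advise_handover(in_two_seconds))  # → True: warn driver/safety system
```

The point of forecasting the state two seconds ahead, as the paper describes, is that the warning precedes the loss of competence rather than coinciding with it.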