Rethinking Autonomous Vehicle Functional Safety Standards: An Analysis of SOTIF and ISO 26262

Peter Els
03/25/2019

NVIDIA is one of the market leaders in providing technology for self-driving cars. This includes safety compliance, ensuring that all systems work in accordance with international regulations.

According to a 2018 Cox Automotive consumer survey, nearly half of the 1,250 people surveyed said they would never buy a Level 5 (or fully autonomous) vehicle – up from 30 percent of the 2,264 people polled two years earlier.

At the same time, consumers’ vehicle-autonomy preferences dropped from Level 4 to Level 2, and the share of respondents who believe roads would be safer if all vehicles were fully autonomous rather than operated by people fell 18 percentage points between 2016 and 2018.

Notwithstanding consumer sentiment, a recent RAND Corporation report concluded that hundreds of thousands of lives could be saved by the widespread adoption of autonomous vehicles, even though they may not yet be accident-proof.

Looking past ISO 26262

Although highly automated vehicles come with a host of advanced sensing technologies, such as cameras, radar and lasers, the ability of the sensors to function perfectly in all weather conditions remains problematic. In particular, the Artificial Intelligence (AI) and Machine Learning (ML) software and algorithms on which automated driving is based are extremely complex, and because they operate as a ‘black box’ they require testing to standards designed to ensure their safe operation.

While ISO 26262 has until now set the standard for functional safety by seeking to eliminate electric/electronic (E/E) system malfunctions, the safety of automated-driving systems is not related to E/E failures alone – it is also linked to factors such as foreseeable misuse of a function by the driver, the performance limitations of sensors or systems, and unanticipated changes in the vehicle’s environment.

In fact, to date, the majority of incidents involving AVs in self-driving mode have been caused by software and system design or engineering shortcomings, and not E/E errors. And with the increased use of AI and ML, verifying software functional safety becomes even more complex.

The difficulty of verifying highly automated systems can be appreciated when one considers the huge volumes of often safety-critical data fed to the algorithms by the sensor array.

Because these algorithms are often highly complex and difficult to analyze, solutions from the fields of deep learning, ML and statistical signal processing are typically applied. These approaches share a probabilistic paradigm.

Depending on the method used, the probabilistic factors may or may not need to be explicitly treated when solving the problem. For some methods the answers are described in terms of probability distributions, while for other methods probabilistic factors affect the solution implicitly and are not indicated in the answers.

This is especially the case with deep learning, where labeled data is used to train an artificial neural network to solve the problem. Consequently, safety concerns become increasingly difficult to tackle when the solutions to complex automation problems are based on deep learning or machine learning, which involve elements that are non-deterministic and difficult to inspect for correctness.
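
To make this concrete, here is a minimal Python sketch (illustrative only: the class labels and logit values are invented for the example) showing why a deep network’s output is a probability distribution rather than a definitive, inspectable answer:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw network outputs (logits) into a probability distribution."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical final-layer outputs of a perception network for one camera
# frame, scoring three classes: pedestrian, cyclist, background.
logits = np.array([2.1, 1.9, 0.3])
probs = softmax(logits)
print(dict(zip(["pedestrian", "cyclist", "background"], probs.round(3))))
# {'pedestrian': 0.504, 'cyclist': 0.413, 'background': 0.083}

# The network never answers "pedestrian"; it answers "pedestrian with p ~ 0.5".
# Any hard decision is a thresholding policy layered on top of this
# distribution, which is why such code cannot be inspected for correctness
# the way deterministic control logic can.
```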

This has prompted regulators, safety lobbyists and the industry in general to expand the standards to address the validation and the verification of systems with complex sensing and AI algorithms, whose limitations in performance could cause safety hazards in the absence of a malfunction.

Smart functional safety for AI driven systems

One such standard, the draft ISO PAS 21448, seeks to address safety issues relating to the electronic systems that govern the safe operation of a vehicle, rather than focusing on E/E system malfunctions, thereby complementing ISO 26262’s functional safety role.

Furthermore, in AI and ML-driven systems, the difficulty of pinpointing why a machine-learning application does something makes troubleshooting or prediction almost impossible. According to Norman Chang, CTO of ANSYS: “The nature of a deep neural network is that it is like a black box; it’s not easy to see what’s going on inside if you need to fix anything.”

It’s for this reason that ISO 21448 concentrates on the “unknown and unsafe” operations, where the risks can only be reduced through testing, simulation and the use of statistical analysis.
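
As a rough sketch of what that statistical analysis can look like, the Python example below estimates a failure rate from a large batch of simulated scenarios and attaches an upper confidence bound. The `simulate_scenario` stub and its assumed 1-in-10,000 failure rate are invented stand-ins for a real vehicle-and-environment simulation, not a method prescribed by the standard:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_scenario() -> bool:
    """Stand-in for one simulated driving scenario.

    Returns True if the system behaved safely. A real SOTIF campaign would
    replace this stub with a full vehicle/sensor/environment simulation.
    """
    return rng.random() > 1e-4  # assumed true failure rate: 1 in 10,000

n_runs = 200_000
failures = sum(not simulate_scenario() for _ in range(n_runs))
rate = failures / n_runs

# 95% upper bound: the "rule of three" if no failures were observed,
# otherwise a normal approximation -- both standard statistical shortcuts.
if failures == 0:
    upper = 3 / n_runs
else:
    upper = rate + 1.96 * (rate * (1 - rate) / n_runs) ** 0.5

print(f"observed failure rate: {rate:.2e}, 95% upper bound: {upper:.2e}")
```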

The “Safety of the Intended Functionality” (SOTIF) standard was originally intended as Part 14 of ISO 26262, but its scope and complexity delayed the release of the revised version of ISO 26262 to such an extent that it was eventually submitted as a new stand-alone SOTIF draft, ISO PAS 21448.

PAS 21448 is an ISO TC 22/SC 32/WG8 initiative that proposes “guidance on the design, verification and validation measures applicable to avoid malfunctioning behavior in a system in the absence of faults, resulting from technological and system definition shortcomings.”

Consequently, PAS 21448 holds that safety violations in a system without failures fall outside the scope of ISO 26262, and it therefore seeks to reduce the following safety threats:

  • Residual risk of the intended function, through analysis
  • Unintended behavior in known situations, through verification
  • Residual unknown situations that could cause unintended behavior, through validation
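
These three bullets map onto the known/unknown scenario areas commonly used to explain SOTIF. The Python sketch below captures that conceptual model (the area descriptions are paraphrased, not quoted from the draft):

```python
from enum import Enum

class SotifArea(Enum):
    """The four scenario areas commonly used to explain SOTIF (ISO PAS 21448)."""
    KNOWN_SAFE     = "area 1: known scenarios, safe behavior"
    KNOWN_UNSAFE   = "area 2: known scenarios, potentially unsafe -> verification"
    UNKNOWN_UNSAFE = "area 3: unknown scenarios, potentially unsafe -> validation"
    UNKNOWN_SAFE   = "area 4: unknown scenarios, safe behavior"

def classify(known: bool, safe: bool) -> SotifArea:
    """Place a scenario into its SOTIF area."""
    if known:
        return SotifArea.KNOWN_SAFE if safe else SotifArea.KNOWN_UNSAFE
    return SotifArea.UNKNOWN_SAFE if safe else SotifArea.UNKNOWN_UNSAFE

# A hazardous low-sun-glare scenario discovered during testing moves from
# area 3 to area 2: once it is known, verification (not validation) is the
# tool for closing it out, shrinking both unsafe areas over time.
print(classify(known=False, safe=False))  # SotifArea.UNKNOWN_UNSAFE
print(classify(known=True, safe=False))   # SotifArea.KNOWN_UNSAFE
```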

While a more definitive guideline would be useful, technology providers have little choice but to adopt the most prevalent solution, which is currently to simulate road miles and generate enough scenario coverage to test AVs from every conceivable angle. This includes the functional-safety performance of the AI and ML algorithms.

NVIDIA Drive – developed with SOTIF in mind

In order to speed up the testing and validation of the SOTIF of autonomous vehicles, NVIDIA has developed what is claimed to be the world’s first functionally safe AI self-driving platform, NVIDIA Drive.

The architecture uses redundant and diverse functions to enable vehicles to operate safely even in the event of faults related to the operator, environment or systems. This holistic safety approach, which includes processes, technologies and simulation systems, is aimed at minimizing risk. Although process, processor design and functionality are key to achieving the intended levels of functional safety, the company has focused much of its development effort on software, AI algorithms and virtual-reality simulation to ensure the SOTIF of the system:

  • Software: Integrates world-leading safety technology from key partners, such as BlackBerry QNX’s 64-bit real-time operating system and TTTech’s MotionWise safety application framework. The NVIDIA toolchain, including the CUDA compiler and TensorRT, uses ISO 26262 Tool Classification Levels to ensure a safe and robust development environment.

  • Algorithms: The NVIDIA DRIVE AV autonomous vehicle software stack performs functions such as ego-motion, perception, localization and path planning. To realize fail-operational capability, each function includes a redundancy and diversity strategy; for example, perception redundancy is achieved by fusing lidar, camera and radar (a toy voting sketch after this list illustrates the principle).

    Deep learning and computer vision algorithms running on the CPU, CUDA GPU, DLA and PVA enhance redundancy and diversity, giving the NVIDIA DRIVE AV stack full backup capability and enabling Level 5 autonomous vehicles to achieve the highest level of functional safety.

  • Virtual Reality Simulation: Road testing is not sufficiently controllable, repeatable, exhaustive or fast enough to prove that the self-driving system does what it is designed to do (SOTIF) over a wide range of situations and weather conditions.

    Therefore, NVIDIA has created a virtual-reality simulator, called NVIDIA AutoSIM, to test the DRIVE platform and simulate rare ‘edge case’ conditions. Running on NVIDIA DGX supercomputers, NVIDIA AutoSIM supports repeatable regression testing and is capable of simulating billions of miles.
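
As noted in the Algorithms bullet above, here is a toy Python sketch of the redundancy-and-diversity principle behind sensor fusion. It is a deliberately simplified two-out-of-three vote, and the `Detection` type is invented for illustration; production perception fuses tracks, covariances and timestamps rather than booleans:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # "camera", "radar" or "lidar"
    obstacle: bool   # did this modality report an obstacle ahead?

def fused_obstacle(detections: list[Detection]) -> bool:
    """Toy 2-out-of-3 vote across diverse sensor modalities.

    The redundancy principle: no single failed or fooled sensor can
    drive the decision alone.
    """
    votes = sum(d.obstacle for d in detections)
    return votes >= 2

# A camera blinded by glare misses the obstacle, but the two other
# independent modalities still carry the vote.
frame = [
    Detection("camera", False),
    Detection("radar", True),
    Detection("lidar", True),
]
print(fused_obstacle(frame))  # True
```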

So while ISO 21448 is expected to have a significant impact on the SOTIF of automated vehicles, the real benefit will be seen in the functional-safety validation of systems enabled by AI and ML software. And by conforming to the Safety of the Intended Functionality, AVs may just live up to human expectations regarding safety, as uncovered in research carried out in China.

As reported, Peng Liu and Run Yang of Tianjin University and Zhigang Xu of Chang’an University asked 499 people in the city of Tianjin to rate the level of risk they were willing to accept when riding in a car with a human driver versus a self-driving car. The tolerance for risk was expressed either in terms of fatalities per kilometer driven or fatalities per population size.

Respondents were asked to accept or reject each traffic risk scenario at one of four levels: never accept, hard to accept, easy to accept, or fully accept, according to Science Daily.

The results, recently published in the journal Risk Analysis, showed that people are willing to accept autonomous vehicles if they are 4 to 5 times safer than a car with a human driver. In other words, they should be able to reduce the danger of death or injury while driving by 75 to 80 percent – which is exactly what ISO 21448 sets out to accomplish.

