
What challenges does autonomous driving pose to ISO 26262 Part II?

Peter Els
01/03/2018

Although rudimentary experiments with self-driving cars were already being carried out in the 1920s, the first self-sufficient and truly autonomous cars only made an appearance in the 1980s, with Carnegie Mellon University's Navlab and ALV projects, and the Eureka Prometheus Project of Mercedes-Benz and Bundeswehr University Munich in 1987.

Since those early demonstrations, the technology has matured to the point where Advanced Driver Assist Systems (ADAS), such as automatic lane keeping and adaptive cruise control, are standard on a number of vehicles. Beyond that, there are numerous fully autonomous vehicle projects in various stages of development, including extended on-road testing of Level 4 (SAE J3016) multi-vehicle fleets.  

With many of the systems deployed in these vehicles being deemed mission critical, will the arrival of autonomous vehicles change functional safety standards?

[Image: ISO 26262. Source: Denver Post]

Addressing functional safety in electrical/electronic systems, the initial version of ISO 26262 was published in 2011; next year, Part II of the standard will be released.

The revised edition largely seeks to address all activities of the safety lifecycle, such as the design and development of safety-related systems, and extends coverage to systems-on-chip classified as Safety Elements out of Context (SEooC).

The goal is to minimize susceptibility to random hardware failures by defining functional requirements, applying rigor to the development process, and taking the necessary design measures, including fault injection, systematic analysis, and metrics reporting.

However, the updated standard does not address the issue of autonomous driving per se, leaving industry insiders wondering: what challenges does autonomous driving pose to the revised ISO 26262 functional safety standard?

ISO 26262 Part II, a good starting point

The automotive software industry is following in the footsteps of other industries that also rely on software and have stringent safety requirements, including medical, rail, and nuclear. Each has processes and certifications to help ensure consistency, quality, and above all, safety. 

For the automotive industry, the main standard is ISO 26262, “Road Vehicles – Functional Safety.” Modeled after IEC 61508, “Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems,” it deals specifically with the automotive sector and addresses the entire software lifecycle.

Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality and functional safety. A more methodical approach than a simple cycle of ‘system-level test and ensuing fail-patch-test’ will be required to deploy safe autonomous vehicles at scale. 

The ISO 26262 development V process is a good foundation from which to work, setting up a framework that ties each type of testing to a corresponding design or requirement document. However, adapting the process to deal with the novel testing problems that autonomous vehicles bring also introduces several new challenges.

[Image: ISO 26262 development V model. Source: iso.org]

It is a well-established safety principle that computer-based systems should be considered unsafe unless convincingly proven otherwise (i.e., safety must be demonstrated, not assumed). Therefore, autonomous vehicles cannot be considered safe unless and until they are shown to conform to ISO 26262. 

Working within the V Model to validate autonomous vehicles

An essential characteristic of the V model of development is that the right side of the V provides a traceable means of checking the result of the left side (verification and validation). However, this scrutiny rests on the assumption that the requirements are actually known, correct, complete, and unambiguously specified.

In the world of the autonomous vehicle this assumption can be challenging.  

Using the V model as the basis for autonomous vehicle validation, five key challenge areas arise:

Driver out of the loop

Complex requirements

Non-deterministic algorithms

Inductive learning algorithms

Fail-operational systems

General solution approaches that seem promising across these different challenge areas include: 

Phased deployment using successively relaxed operational scenarios

Use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions

Fault injection as a way to perform more efficient edge case testing 
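
To make the last of these concrete, here is a minimal sketch of sensor-level fault injection. Everything in it, including the read_distance_m() interface, the fault modes, and the braking threshold, is invented for illustration; a real program would more likely inject faults at the bus or on a hardware-in-the-loop rig:

import random

# Hypothetical sensor interface, for illustration only.
def read_distance_m(true_distance_m: float) -> float:
    """Nominal sensor read: returns the true obstacle distance."""
    return true_distance_m

def inject_fault(reading: float, mode: str) -> float:
    """Corrupt a sensor reading to exercise rare edge cases on demand."""
    if mode == "stuck":        # sensor freezes at an arbitrary value
        return 5.0
    if mode == "dropout":      # no return signal, reported as infinity
        return float("inf")
    if mode == "noise":        # heavy noise burst on the reading
        return reading + random.gauss(0.0, 2.0)
    return reading             # "none": pass the reading through

def brakes_commanded(distance_m: float) -> bool:
    """Toy braking rule standing in for the system under test."""
    return distance_m < 10.0

# Drive the same logic through each fault mode and inspect the response:
# the "dropout" case shows an obstacle at 8 m producing no braking at all.
for mode in ("none", "stuck", "dropout", "noise"):
    reading = inject_fault(read_distance_m(8.0), mode)
    print(f"{mode:8s} reading={reading:6.1f} brakes={brakes_commanded(reading)}")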

A key high-level argument is that, regardless of the approach, there will need to be a way to detect when autonomous functions are not working properly (whether due to hardware faults, software faults, or simply not meeting the situational requirements), and to bring the system to a safe state when such faults are detected, via a fail-operational, degraded-mode autonomous capability.

In addition, the use of heterogeneous redundancy (two modules: the monitor and the actuator) is intended to prevent a malfunctioning actuator from issuing dangerous commands. However, it also causes loss of the actuator function if either module malfunctions, which is a problem for functions that must keep operating safely after a failure, such as steering a moving vehicle.
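
As a rough illustration of the pattern, consider the sketch below. All class and signal names are hypothetical, and a production monitor would be developed and verified independently of the channel it guards, on separate hardware:

from dataclasses import dataclass

@dataclass
class SteeringCommand:
    angle_deg: float  # requested road-wheel angle

class ComplexChannel:
    """Stands in for the complex autonomy function (e.g. an ML planner)."""
    def compute(self, situation: dict) -> SteeringCommand:
        return SteeringCommand(angle_deg=situation.get("planned_angle", 0.0))

class SafetyMonitor:
    """Simple, independently specified check on the channel's output.

    The monitor only has to recognise unsafe commands, not drive the
    vehicle, so it can be verified far more rigorously than the planner.
    """
    MAX_ANGLE_DEG = 15.0  # illustrative safety-envelope limit

    def permit(self, cmd: SteeringCommand) -> bool:
        return abs(cmd.angle_deg) <= self.MAX_ANGLE_DEG

def degraded_mode_command() -> SteeringCommand:
    """Fail-operational fallback: a conservative command used while the
    vehicle is brought to a safe state (a real system would do far more)."""
    return SteeringCommand(angle_deg=0.0)

def control_step(situation: dict) -> SteeringCommand:
    cmd = ComplexChannel().compute(situation)
    if SafetyMonitor().permit(cmd):
        return cmd
    return degraded_mode_command()  # monitor vetoes the unsafe command

print(control_step({"planned_angle": 40.0}))  # vetoed: angle_deg=0.0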

Beyond architecture, detailed stipulation of safety-critical requirements can be impractical in autonomous applications for at least two reasons. One is that many requirements might be only partially safety-related, and are inextricably entwined with functional performance.

For example, the many conditions for operating a parking brake while a vehicle is moving could form a preliminary set of requirements. However, only some aspects of those requirements are actually safety-critical, and those aspects are largely emergent effects of the interaction of the other functions.

In the case of the parking brake, the deceleration profile when the brake is applied at speed is one of the desired functions, and is likely to be described by numerous functional requirements. But the only safety-critical aspect of that deceleration mode might be that the emergent interaction of the other requirements must avoid locking up the wheels during the deceleration process.
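
To illustrate how that single safety-critical aspect might be isolated from the many functional requirements around it, here is a toy runtime check. The slip threshold and signal names are invented; real limits would come from vehicle dynamics analysis:

def slip_ratio(vehicle_speed_mps: float, wheel_speed_mps: float) -> float:
    """Longitudinal slip: 0.0 = free rolling, 1.0 = fully locked wheel."""
    if vehicle_speed_mps <= 0.1:        # avoid dividing by ~zero at rest
        return 0.0
    return (vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps

# Illustrative limit only; a real threshold comes from dynamics analysis.
LOCKUP_SLIP_THRESHOLD = 0.2

def deceleration_is_safe(vehicle_speed_mps: float,
                         wheel_speeds_mps: list) -> bool:
    """The isolated safety-critical requirement: no wheel may approach
    lock-up, however the functional requirements shape the deceleration."""
    return all(slip_ratio(vehicle_speed_mps, w) < LOCKUP_SLIP_THRESHOLD
               for w in wheel_speeds_mps)

print(deceleration_is_safe(20.0, [19.5, 19.2, 18.9, 19.8]))  # True
print(deceleration_is_safe(20.0, [19.5, 2.0, 18.9, 19.8]))   # False: near lock-up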

The second reason that annotating specifications to identify safety-relevant requirements may fail is that it might not be possible at all when machine learning techniques are used, because the requirements then take the form of a set of training data pairing input values with correct system outputs.
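
The contrast can be shown in miniature. Both the explicit predicate and the labeled examples below are invented for illustration; the point is that the second form offers no enumerable clauses to annotate as safety-relevant:

# A traditionally specified requirement is an explicit, checkable predicate:
def requirement_brake_if_close(obstacle_m: float, speed_mps: float) -> bool:
    """REQ-042 (illustrative ID): brake when time-to-collision < 2 s."""
    return obstacle_m / max(speed_mps, 0.1) < 2.0

# With machine learning, the "requirement" is instead implicit in the
# labeled training examples (all values invented for illustration):
training_data = [
    ({"obstacle_m": 4.0,  "speed_mps": 10.0}, "brake"),
    ({"obstacle_m": 80.0, "speed_mps": 10.0}, "cruise"),
    ({"obstacle_m": 15.0, "speed_mps": 30.0}, "brake"),
]
# There is no clause to mark as safety-relevant; the intended behavior is
# only defined to the extent that the samples happen to cover it.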

Unfortunately, ISO 26262 was not designed to accommodate technologies such as machine learning, creating a tension between the need to innovate and the need to improve safety.

Machine Learning adds complexity 

Appropriate behavior for autonomous vehicles is only possible if a complex series of perception and control decisions are made correctly. Achieving this usually requires proper tweaking of parameters, including everything from a calibrated model of each camera lens to the well-tuned weighting of the risks of swerving versus stopping to avoid obstacles on a highway. 

The challenge here is to find the calibration model or the ratio of weights such that the error function is minimized. In recent years, most robotics applications have turned to machine learning to do this, because the complexities of the multi-dimensional optimization are such that manual effort is unlikely to yield the appropriate levels of performance. 
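
As a toy illustration of the kind of optimization involved, the sketch below fits a single lens-scale parameter by gradient descent against invented reference data. Real calibrations optimize many parameters simultaneously, which is precisely why manual tuning gives way to machine learning:

import numpy as np

# Toy calibration: estimate a single lens scale factor k such that
# k * raw best matches the measured reference points (invented data).
raw = np.array([1.0, 2.0, 3.0, 4.0])
measured = np.array([2.1, 3.9, 6.2, 7.8])

def error(k: float) -> float:
    """Sum-of-squares error function to be minimized."""
    return float(np.sum((k * raw - measured) ** 2))

k = 0.0        # initial guess
lr = 0.01      # learning rate
for _ in range(200):
    grad = float(np.sum(2 * raw * (k * raw - measured)))  # dE/dk
    k -= lr * grad                                        # descent step

print(f"k = {k:.3f}, residual error = {error(k):.4f}")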

However, the use of ML can create new types of hazards. One such hazard arises when the human operator becomes complacent because they believe the automated driver assistance (often built on ML) is smarter than it actually is; for instance, when the driver stops monitoring an automated steering function. On one level, this can be viewed as a case of “reasonably foreseeable misuse” by the operator, and such misuse is identified in ISO 26262 as requiring mitigation. However, this approach may be too simplistic.

As ML enables increasingly sophisticated driver assistance, the role of the human operator becomes increasingly critical in correcting for malfunctions. But increasing automation can change the operator's behavior, eroding their skill level and limiting their ability to respond when needed. Such behavioral changes can degrade safety even when there is no system malfunction or misuse.

So as ISO 26262 Part II is prepared for introduction, it already looks set to fall short of being an all-encompassing standard when applied to autonomous driving. But in such a rapidly changing environment that is no surprise, and the gaps will, no doubt, be addressed in future iterations.

Sources:

Rick Salay, Rodrigo Queiroz, Krzysztof Czarnecki; Arxiv.org; An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software; September 2017; https://arxiv.org/pdf/1709.02435.pdf

Riccardo Mariani; AutoSens; Applying ISO 26262 to ADAS and automated driving; August 2017; http://player.mashpedia.com/player.php?ref=mashpedia&q=joE9zbcrKAw

Patrick Londa; Weblogic Systems; Racing Towards Self-Driving Software and the Internet of Cars; September 2017; http://weblogic.sys-con.com/node/4145641

Lance Williams; Electronic Design; The Winding Road to Autonomous Vehicle Reality; May 2017; http://www.electronicdesign.com/automotive/gmsl-automotive-multistreaming-single-cable
