Regulation & Safety in Automotive: ISO 26262
Safety is the top concern in the automotive industry. With the continuing rapid advancement of autonomous vehicles, the complexity of the underlying technology demands further assurances.
More complicated, modern vehicle systems can include millions of lines of software code and hundreds of ECUs (electronic control units). As such, functional safety is a requisite in the construction of advanced driver-assistance systems (ADAS) and more complex self-driving systems.
Further, the goal is the proper implementation of ISO 26262, the international standard for the functional safety of electrical and/or electronic (E/E) systems in production automobiles.
Using a step-by-step approach, ISO 26262 regulates product development on both the hardware and software levels. It is a series of recommendations and requirements that begins at the product's concept phase and continues throughout its development. It details how to assign acceptable risk levels to systems and components, as well as how to document the overall testing process.
More specifically, ISO 26262 addresses several areas to manage functional safety and regulate product development. Details include: an automotive safety lifecycle; a risk-based approach for determining risk classes (Automotive Safety Integrity Levels, or ASILs); the use of ASILs to specify the safety requirements necessary for achieving an acceptable residual risk; and requirements for validation and confirmation measures to ensure a sufficient and acceptable level of safety is achieved.
A safety mechanism, in the context of ISO 26262, is a technical solution implemented by E/E functions or elements, or by other technologies, to detect faults or control failures to achieve or maintain a safe state.
Such safety mechanisms include error correction code (ECC), cyclic redundancy check (CRC), hardware redundancy, and built-in-self-test (BIST).
The effectiveness of a safety mechanism at detecting random failures is measured by metrics covering how completely and how quickly faults and failures are detected, as well as the overall residual risk.
Autonomous vehicle safety is so intensely pursued that a new standard further considers the potential issues. The complementary standard is ISO 21448, described as "Safety of the Intended Functionality" or SOTIF.
The acronym is defined as: "The absence of unreasonable risk due to hazards resulting from functional insufficiencies of the intended functionality, or by reasonably foreseeable misuse by persons."
As the second safety standard developed to oversee the integrity of the autonomous industry, SOTIF was developed to mitigate unreasonable risk for vehicles and advanced driver-assistance systems (ADAS) when they encounter problems on the road. It's even relevant in instances where the hardware and software haven't been problematic.
The reasons include inadequate sensor configuration, unexpected changes in the environment, misuse of functions by the vehicle driver, and the inability of AI-based systems to accurately interpret the situation and operate safely.
As an overall concept, SOTIF is the framework for identifying hazardous conditions and a method for verifying and validating the behavior until there's an acceptable level of risk.
But unknowns remain in these systems, ranging from random hardware failures and single-point faults to software design and coding errors and inadequate testing.
Consider several prominent problem areas:
Systematic failures: Typically a problem with an item or function that occurs deterministically during development, the manufacturing process, or maintenance. These problems can be addressed by a change in the design, the manufacturing process, operational procedures, documentation, or other relevant factors.
Random failures: Classified into permanent and transient faults, and occurring throughout the lifetime of a hardware element, random failures emanate from defects inherent to the manufacturing process or usage conditions.
Random failures can be addressed during the design and verification of the hardware and software systems by introducing safety mechanisms to make the architecture able to detect and correct malfunctions.
Single-point fault metric: This metric reflects the robustness of an item or function to single-point faults, either by design or by coverage from safety mechanisms.
Latent fault metric: This metric reflects the robustness of an item or function against latent faults, either by design, by fault coverage via safety mechanisms, or by the driver's recognition of a fault's existence before the violation of a safety goal.
Probabilistic metric of hardware failures: This metric provides rationale that the residual risk of a safety goal violation due to random hardware failures is sufficiently low.
In terms of software, systematic failures typically occur due to human errors during different phases of the product development lifecycle. They can often be traced back to a root cause and corrected.
Software design and coding errors: Poorly embedded code, incorrect queries, and syntax, timing, and algorithm errors can all be problematic, as can a lack of self-tests.
Inadequate testing errors: Sometimes software appears to have passed testing criteria when it actually failed to perform the required task.
Requirement specification and communication errors: Among the largest and most common error sources, these occur in two ways. The software can execute correctly but the requirement hasn't been properly defined, or the software developer simply hasn't correctly understood the requirement.
Errors due to software changes: When new software is introduced, issues can occur if there's a failure in configuration control.
Timing errors: Sometimes the software performs the correct function but at the wrong time, or when it is not appropriate.
Verification and validation of hardware and software systems are paramount to the top priority in the advancement of autonomous driving: safety.
It's further acutely important as artificial intelligence is increasingly used to make decisions. Those decisions, the industry at large knows, are only as good as the data provided.
Interested in learning more? Advance your knowledge and leave with the necessary tools to stay ahead of the industry when you attend the Pre-Conference Safety Masterclasses at Autonomous Vehicles Silicon Valley! Break into groups with peers for a collaborative working session and apply the framework to different use cases:
MASTERCLASS A: SOTIF AND ISO 21448: DEFINING PRINCIPLES OF FUNCTIONAL SAFETY presented by Dr. Hakan Sivencrona, Safety Program Manager, Zenuity
MASTERCLASS B: CREATING A SAFETY MANAGEMENT SYSTEM FOR A THIRD-PARTY VEHICLE presented by Ed Straub, Director of Automotive, SAE
MASTERCLASS C: ADVANCING ON-ROAD TESTING PRACTICES THROUGH J3018_201919 presented by Kelly A. Nantel, Vice President of Communications and Advocacy, National Safety Council
PLUS the first 20 people to register for the Pre-Conference Masterclasses will gain exclusive access to the Prospect Silicon Valley Site Tour!