
Are We There Yet? Autonomous System Safety and Sensor Accuracy vs. ADAS Complexity

Graham Heeps
10/10/2016

The fatal accident involving Tesla’s semi-autonomous Autopilot technology has heightened public awareness of the safety implications of handing vehicle control to sensors and software. What are specialist suppliers doing to ensure that, as more sensors and greater processing complexity are built into vehicles, the risk of accidents is decreased, not increased?

A robust system begins with robust sensors, according to Louay Eldada, the CEO of LiDAR supplier Quanergy, whose S3 product will go into production with Sensata at the end of 2016, ahead of potential vehicle launches a couple of years down the road. He identifies the need for physical robustness in the sensor – in the case of Quanergy, a solid-state LiDAR scanner that’s good for more than 100,000 hours of operation – as well as robustness in the information it supplies. Like others, he’s convinced that the addition of LiDAR to the sensor mix will deliver the accuracy needed for greater levels of vehicle autonomy.

© chombosan - fotolia

“When you generate an object list that’s refreshed maybe 30 or 50 times per second, every one of those objects has to be correct,” he says. “You cannot have false positives or false negatives, which are equally bad.” 
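To put that requirement in perspective, here is a back-of-the-envelope sketch; the refresh rate comes from Eldada’s figures, while the object count and per-object error rate are purely our own illustrative assumptions.

```python
# Back-of-the-envelope arithmetic: at a 30 Hz refresh rate, even a
# tiny per-object error rate compounds into frequent mistakes over
# an hour of driving. All numbers below are assumptions.

refresh_hz = 30          # object-list updates per second (low end of 30-50)
objects_per_frame = 20   # assumed number of tracked objects in a scene
error_rate = 1e-4        # assumed chance that a given object is wrong

checks_per_hour = refresh_hz * objects_per_frame * 3600
expected_errors_per_hour = checks_per_hour * error_rate

print(f"Object checks per hour: {checks_per_hour:,}")               # 2,160,000
print(f"Expected errors per hour: {expected_errors_per_hour:.0f}")  # 216
```

Even a seemingly high 99.99% per-object accuracy still yields hundreds of raw errors per hour of driving, before any tracking-level filtering, which is why every object in every frame matters.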

Redundant systems for reliable information

“The idea of dual or triple redundancy using independent sensing technologies for automated driving – or at least two sensors, in the long term, for the most robust automated emergency braking (AEB) system – will give us the opportunity to eliminate as many false positives as possible,” adds Andy Whydell, director of product planning for Global Electronics at ZF TRW. “In the case of a two-sensor system for AEB, we can tailor our algorithms based on parameters such as vehicle speed and the detected object. I might choose to be more aggressive if my camera detects a pedestrian stepping out in front of me at low speed, for example. Even with only one input, I might start a response – maybe a brake pre-fill or first-level braking – while I wait for a second sensor to confirm, because any speed reduction will give a direct benefit if a pedestrian is really there.”
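Whydell’s graduated response can be pictured as a simple decision ladder. The sketch below is our own illustration of that logic, not ZF TRW’s implementation; the thresholds, function names and action labels are all assumptions.

```python
# Minimal sketch of a graduated two-sensor AEB response, assuming a
# camera as the primary detector and a radar as the confirming sensor.

def aeb_response(camera_detects: bool, radar_confirms: bool,
                 speed_kph: float, target: str) -> str:
    """Return a braking action for a detected object ahead."""
    if not camera_detects:
        return "no_action"
    # Single-sensor detection: start a low-risk, reversible response
    # while waiting for the second sensor to confirm.
    if not radar_confirms:
        return "brake_prefill"
    # Confirmed by both sensors: tailor aggressiveness to context,
    # e.g. braking harder for a pedestrian at low speed.
    if target == "pedestrian" and speed_kph < 40:
        return "full_braking"
    return "staged_braking"

print(aeb_response(True, False, 30, "pedestrian"))  # brake_prefill
print(aeb_response(True, True, 30, "pedestrian"))   # full_braking
```

The key design point in the quote is that the single-sensor action is chosen to be beneficial even if the detection later proves false: a pre-fill or light braking sheds speed without a harsh false intervention.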

© ZF TRW

“As we add sensors, we add complementary technologies that help us get the redundancy we need and give us more confidence in the information about what’s outside the vehicle,” agrees Dean McConnell, director of customer programs for Continental’s ADAS business unit in North America. “That said, you still need hardware and software redundancy to meet the intent of the requirements in ISO 26262. It’s a significant effort to validate it [all] and verify that you are meeting those requirements. We do as much simulation as we can, but the customers to whom we provide product have to do a lot of vehicle-level testing and validation to ensure that they do meet the intent of the functional safety management level. That level will ramp up significantly as we move from discrete ADAS functions to automated functions – from a component level towards vehicle-level functionality.”

Meeting the safety goals

Limiting or disabling certain ADAS or autonomous functions when a sensor is faulty, unavailable or otherwise unable to do its job is viewed as standard practice by all of the experts we spoke to. Scenarios to be considered might include accident damage, ice build-up on a front-mounted radar, snow obscuring road lane markings, or a dead insect on the windscreen blocking a camera’s view. All can be handled by a combination of sensor diagnostics and processing intelligence. But what about handling violations of a safety goal when no sensor failure has occurred, to ensure that introducing new functionality does not increase the risk to the general public?
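A minimal sketch of how such graceful degradation might look, assuming a hypothetical two-sensor setup; the function names and the mapping of sensors to features are illustrative only, not any supplier’s actual logic.

```python
# Illustrative diagnostics layer: limit or disable functions when a
# sensor reports itself blocked or faulty, rather than failing silently.

def available_functions(radar_ok: bool, camera_ok: bool) -> set[str]:
    functions = set()
    if radar_ok:
        functions.add("adaptive_cruise")
    if camera_ok:
        functions.add("lane_keeping")
    if radar_ok and camera_ok:
        functions.add("automated_emergency_braking")  # assumed to need both
    return functions

# Ice on the front radar: ACC and AEB drop out, lane keeping remains.
print(available_functions(radar_ok=False, camera_ok=True))
```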

“This falls outside of the ISO 26262 functional safety standard and is referred to as Safety of the Intended Functionality (SOTIF),” explains Whydell. “ZF TRW has extensive experience identifying, analyzing and complying with SOTIF requirements and participates in standards discussions on this topic.

“In an AEB system, for example, these failures are covered by the algorithms to reject noise and distinguish true targets and potential collisions. First, video simulations with valid and invalid contexts are run to prove the algorithms. Then this is validated by simulation and testing, based on the risk of harm due to a collision. If the system has very little braking authority, less validation is needed. High braking authority requires many validation miles and scenarios to have confidence that the risk is not significant. ZF TRW performs simulations of thousands of braking scenarios to determine the risk.”
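Why does braking authority drive validation volume? One standard statistical argument – our illustration, not a description of ZF TRW’s method – is that demonstrating a failure rate below a target p with zero observed failures takes on the order of 3/p test scenarios at 95% confidence, so a tenfold stricter target means roughly tenfold more scenarios.

```python
# Zero-failure test count needed to claim the failure rate is below a
# target at a given confidence (the classic "rule of three", generalized).
# Targets here are assumed figures for illustration.

import math

def scenarios_needed(max_failure_rate: float, confidence: float = 0.95) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

print(scenarios_needed(1e-3))  # lenient target (low authority): ~3,000 runs
print(scenarios_needed(1e-6))  # strict target (high authority): ~3,000,000 runs
```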


Minimizing latency to increase performance and reliability

Mike Thoeny, Delphi’s managing director of Electronic Controls Europe, points to his company’s RACam unit, which combines a radar, camera and shared processing in a single unit behind the windscreen, as an example of how latency in communications between sensors and processors can be reduced. Another benefit of RACam, which entered production with Volvo in 2015 and will be introduced by Renault this year, is that it moves the radar to a protected spot behind the windscreen, away from the front of the car where, for example, ice build-up in cold conditions can interfere with its performance.

Next year, Delphi’s multi-domain controller – a central ECU that relieves sensors of the processing burden and therefore speeds up the overall system response – will enter production with Audi as the zFAS controller architecture. This flexible, scalable setup should enable OEMs to more easily upgrade their ADAS and autonomous systems mid-cycle, offer over-the-air software updates, or respond to new requirements and tests from bodies like Euro NCAP. It should also help sensors to continue to provide their intended safety function at a time when the complexity of ADAS and semi-autonomous systems is rising.

“When the sensors are doing the processing and then sending the data, there’s always some latency to the processor that’s doing the vision algorithm,” says Thoeny. “But with all of the raw data coming into the central controller, we can process and act upon it without any latency. It gives very precise information of where the vehicle is, and makes all of the functions work better.”
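The practical stake in those milliseconds is distance. A quick calculation with assumed figures – neither Delphi’s numbers nor measured values – shows how far a vehicle travels during the sensing-to-decision delay:

```python
# Distance travelled during end-to-end pipeline latency at highway
# speed. Speed and latency values are illustrative assumptions.

speed_kph = 130
speed_ms = speed_kph / 3.6            # ~36.1 m/s

for latency_ms in (10, 50, 100):      # assumed pipeline delays
    distance = speed_ms * latency_ms / 1000
    print(f"{latency_ms:3d} ms latency -> {distance:.2f} m travelled")
```

At 130 km/h, shaving even a few tens of milliseconds off the perception pipeline buys the system meters of extra reaction distance.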

Testing, testing

Every expert we spoke to for this story stressed that, when it comes to verifying the performance of ADAS or autonomous systems, there’s no substitute for real-world testing. By way of example, Delphi typically racks up more than 1 million road kilometers during an OEM program, not including the OEM’s own road testing.

“We design against our own learning, not just what the customer specifies in their top-level requirements, or Euro NCAP targets,” Thoeny explains. “We use the functional safety/hazard analysis to identify areas that can be problematic for the system. These become areas of extended focus in our verification activities. You must combine the theoretical analysis and lab testing with testing on real roads.”

He notes that Delphi has more than 15 years’ worth of use-case data to fall back on, having launched its first production radar on the Jaguar XKR in 1999. “Those years of experience in verifying these systems have shown us that these real-world scenarios are hard to predict, but with all the production programs that we have now, we can predict a lot of this,” he says. “That also helps to build the intelligence of the systems as we move into more complicated scenarios, and build confidence that these systems will operate properly across a wide range of inputs.”

Despite this growing bank of experience, it’s clear that there’s still some way to go in defining the best designs and test methods in a fast-developing field of technology where regulatory standards and third-party test procedures remain a rarity.

“We’re working closely with all of our customers to try to define the best test methods, not just the best product design,” says Continental’s McConnell. “We’re still learning how to ensure that the proper test methods prevent something from getting into the field that shouldn’t have.

Still a long way to Vision Zero

“By the same token, I think we have a long way to go on a system level as you add functionality and complexity. There’s new sensor technology, too – we have invested in Hi-Res 3D Flash LiDAR (Continental recently bought the business from Advanced Scientific Concepts, Inc.) to begin to prepare for that future need in the market. Beyond that, we can look to V2X or V2V communications, or other information coming from outside the vehicle, as another input for decision making. All those things are going to help improve the intelligence of the system and help us get to Vision Zero.”

So will we ever see the day when cars can boast 100% sensor certainty? Quanergy’s Eldada says that 100% certainty is unrealistic – but argues that it doesn’t matter. “Nothing in life is 100%,” he reasons. “The question becomes, how many 9s do you have? Right now, video- and radar-based systems do not even have a 90% rate of [detection] accuracy. With LiDAR, we assess that today we are at 99.99%. We believe we can add many more 9s over time, as we invest more. The number of accidents will never be zero, but the key thing is that the number will be reduced over time by improving the systems.”
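Eldada’s “how many 9s” framing can be made concrete. Assuming a 30 Hz object-list refresh (the lower of the rates he cites earlier) and treating each frame as an independent detection, the mean time between misdetections of a single tracked object grows dramatically with each added 9; the calculation below is our own illustration.

```python
# Mean time between misdetections of one tracked object at a 30 Hz
# refresh rate, for different per-frame accuracies. Assumes each
# frame is an independent detection, purely for illustration.

refresh_hz = 30
for accuracy in (0.90, 0.9999, 0.99999999):
    error_rate = 1 - accuracy
    seconds_between_errors = 1 / (refresh_hz * error_rate)
    print(f"{accuracy:.8f} -> one error every {seconds_between_errors:,.1f} s")
    # 90%       -> every 0.3 s
    # 99.99%    -> every ~5.5 minutes
    # 8 nines   -> every ~39 days
```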
