
Intelligently developing and testing the AI safety case

Peter Els
08/20/2018

After several years of testing, self-driving cars are finally being seen on the roads – sans ‘nanny-drivers’ behind the controls. Illustrating the rapid progress, the clear leader in highly automated (Level 4) driverless vehicles, Waymo, is about to increase its number of self-driving Chrysler Pacificas from about 600 to 62,600!

While this may be considered an extended trial, a 2017 Intel study prepared by Strategy Analytics predicts the economic opportunity created by autonomous vehicles (AVs) will grow from $800 billion in 2035 (the base year of the study) to $7 trillion by 2050.

In the short term, Mercedes has Level 4 fleet plans for 2020, and BMW and Ford are aiming to place AVs into mobility services by 2021. Other automakers have similar targets, including Hyundai (2021), Volkswagen (2022) and Volvo (2021).

And in March 2018, General Motors announced it would invest $100 million in Michigan manufacturing facilities to produce Chevrolet Bolt EVs without steering wheels or pedals for mobility-fleet use beginning in 2019.


Image source: autopromotec.com

Because interest in AVs is being driven in part by a desire to reach zero fatalities worldwide, automated driving must be safer than human driving if the technology is to reach critical mass. That’s a tough task, particularly when it comes to the vehicle reacting quickly and properly in the one-in-a-million “edge cases” that are likely to arise, making the timeline for Level 5 capability far from certain.

While sensors such as lidar, radar, ultrasonics and cameras will always form an integral part of any self-driving platform, many manufacturers are making meaningful progress in applying Artificial Intelligence models to AVs. The rapid development of machine learning and self-learning neural networks, in particular, is seen as a way to improve functionality and cut development times in several key areas.

However, there are growing concerns over the integration of human factors and software failures into the qualitative and quantitative risk assessments that underpin safety-critical systems development.

According to Philip Koopman, a computer scientist at Carnegie Mellon University who works in the automotive industry: “You can’t just assume this stuff is going to work.”

When designing the objective function for an AI system, the designer specifies the objective but not the exact steps for the system to follow. This allows the AI system to come up with novel and more effective strategies for achieving its objective.

But if the objective function is not well defined, the AI’s ability to develop its own strategies can lead to unintended, harmful side effects.
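
As a toy illustration of why the objective specification matters – not drawn from any production system – the sketch below scores the same driving behaviour with a naively specified objective and with one that also penalizes the side effects the designer cares about; all terms and weights are made-up assumptions.

```python
# Hypothetical illustration: two candidate objective functions for a
# lane-keeping agent. Names, terms and weights are invented for this sketch.

def naive_objective(progress_m, lane_offset_m, jerk, min_gap_m):
    # Rewards forward progress only. An optimizer is free to exploit this
    # by tailgating or swerving, because nothing in the objective forbids it.
    return progress_m

def safer_objective(progress_m, lane_offset_m, jerk, min_gap_m):
    # Still rewards progress, but explicitly penalizes the side effects
    # the designer cares about: drifting out of lane, harsh manoeuvres
    # and closing on the vehicle ahead.
    return (progress_m
            - 2.0 * abs(lane_offset_m)           # stay centred in the lane
            - 0.5 * abs(jerk)                    # keep the ride smooth
            - 5.0 * max(0.0, 10.0 - min_gap_m))  # keep roughly 10 m of headway

# The same aggressive driving snippet scored by both objectives:
print(naive_objective(progress_m=50, lane_offset_m=1.2, jerk=4.0, min_gap_m=3.0))
print(safer_objective(progress_m=50, lane_offset_m=1.2, jerk=4.0, min_gap_m=3.0))
```

The naive objective rates the risky behaviour just as highly as safe driving; only the richer objective exposes the unwanted side effects.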

Koopman is one of several researchers who believe that the nature of machine learning makes verifying that these autonomous vehicles will operate safely very challenging.

Challenges in developing the AI safety case

Traditionally, engineers write computer code to meet requirements and then perform tests to check that it meets them.

But with machine learning – which lets a computer grasp complexity, for example processing images taken at different hours of the day while still identifying important objects in a scene such as crosswalks and stop signs – the process is not so straightforward. It is almost impossible to write complete requirements for a machine-learned function.

This is an inherent risk and failure mode of inductive learning: looking inside the model to see what it does only offers up statistical numbers. It is a typical black box that doesn’t immediately divulge what is being learnt.

There’s also the challenge of ensuring that small changes in what the system perceives, perhaps because of fog, dust, or mist, don’t affect what the algorithms identify.

For instance, research conducted in 2013 found that changing individual pixels in an image – imperceptible to the unaided eye – can trick a machine learning algorithm into mistaking a school bus for a building.
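
A minimal sketch of how such an adversarial perturbation can be generated – using the fast gradient sign method, a later but widely used technique rather than the exact procedure from the 2013 study – is shown below; the pretrained ResNet, the random stand-in image and the perturbation budget are illustrative assumptions.

```python
# Minimal FGSM sketch (assumes PyTorch + torchvision): a tiny, pixel-level
# perturbation in the direction of the loss gradient can flip a prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = model(image).argmax(dim=1)                      # model's original prediction

# Gradient of the classification loss with respect to the *input* pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 2.0 / 255                                      # illustrative budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original:", label.item(),
      "adversarial:", model(adversarial).argmax(dim=1).item())
```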

When it comes to object detection in autonomous driving, many researchers believe that brute-force supervised learning will not get us to L5 in our lifetimes. A 2016 report by RAND found that AVs would have to be driven billions of miles to demonstrate acceptable reliability. According to Toyota Research Institute’s CEO Gill Pratt, we need trillion-mile reliability.

On top of the long time horizon, the cost of having humans label the images generated over those miles is extremely high – approximately $4-$8 per 1920x1080 image for semantic segmentation, depending on quality of service – and simply doesn’t scale.
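
To make the scale concrete, here is a back-of-the-envelope estimate; apart from the $4-$8 per-image figure quoted above, every number (total mileage, average speed, frames kept for labeling) is an illustrative assumption.

```python
# Back-of-the-envelope labeling cost estimate. Only the $4-$8 per-image
# range comes from the article; the other figures are assumptions.
MILES = 1_000_000_000        # "billions of miles" of driving data
AVG_SPEED_MPH = 30           # assumed average fleet speed
FPS_KEPT = 1                 # assume only 1 frame per second is labeled

hours = MILES / AVG_SPEED_MPH
images = hours * 3600 * FPS_KEPT

for cost_per_image in (4, 8):
    total = images * cost_per_image
    print(f"${cost_per_image}/image -> ${total:,.0f} "
          f"for {images:,.0f} labeled frames")
```

Even under these generous assumptions the bill runs to hundreds of billions of dollars, which is the sense in which brute-force labeling "just doesn't scale".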

In addition, there isn’t a strong supervised learning practice that can capture all of the edge cases in road conditions well enough to guarantee the level of safety demanded by autonomous driving.

Establishing the AI safety case

On the other hand, unsupervised learning can dramatically speed up training and definition of most safety cases related to AI functions.

This is typically achieved through clustering (grouping data by similarity), dimensionality reduction (compressing the data while maintaining its structure and usefulness), and recommender systems. Unsupervised learning is most commonly used to pre-process a dataset. This sort of data transformation-based learning is starting to be seen as the old-school way of utilizing unsupervised learning, and new forms are starting to emerge.
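
A minimal sketch of that kind of pre-processing – dimensionality reduction followed by clustering, using scikit-learn on synthetic stand-in feature vectors – might look like this:

```python
# Sketch of unsupervised pre-processing: compress per-frame features, then
# group similar frames so unusual clusters can be reviewed first.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))   # stand-in for per-frame feature vectors

# Dimensionality reduction: 128-D features down to 10-D, preserving structure.
compressed = PCA(n_components=10).fit_transform(features)

# Clustering: group frames by similarity without any labels.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(compressed)

sizes = np.bincount(clusters)
print("smallest cluster (candidate edge cases):", sizes.argmin(), sizes.min())
```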

Building on the unsupervised methodology, many companies are adopting self-supervised learning. A distinctive form of ML, self-supervised learning can predict the depth of a scene from a single image by using prior knowledge about the geometry of similar settings – essentially creating geometric rules that supervise the network automatically.

This is very useful when building a safety case for an AV navigating real-world traffic conditions: by using prior knowledge about time and causal reasoning it is possible to predict future frames of video from past ones. Behind every two-dimensional image there is a three-dimensional world that explains it. When the 3D world is compressed down to a single 2D image, a lot of data is lost, and the way it is compressed is not random. By harnessing the relationship between the 3D world and its 2D projection, it is possible to work backwards from an input image so that an AV can understand the 3D world and how it interacts with the environment.
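
A minimal sketch of that idea, using the data's own temporal ordering as the training signal, is shown below: the "label" is simply the next frame of video, so no human annotation is needed. The tiny convolutional network and random tensors are placeholders, not a production depth or frame-prediction model.

```python
# Self-supervision sketch: predict frame t+1 from frames t-1 and t.
# The target comes for free from the video itself.
import torch
import torch.nn as nn

predictor = nn.Sequential(                 # maps two stacked frames -> next frame
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

clip = torch.rand(8, 3, 3, 64, 64)         # batch of 3-frame clips (B, T, C, H, W)
past = clip[:, :2].reshape(8, 6, 64, 64)   # frames t-1 and t, stacked on channels
future = clip[:, 2]                        # frame t+1 is the free training target

prediction = predictor(past)
loss = nn.functional.l1_loss(prediction, future)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("self-supervised reconstruction loss:", loss.item())
```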

Another method of defining the safety case is to review in-car footage of critical AV performance in difficult conditions such as multi-lane roundabouts. By recording the AV’s interaction with each specific traffic condition, the data is labeled through natural/self-supervision, without the company having to pay a human to annotate the vehicle’s reaction to the road conditions.

Also, the spatial/temporal coherence of video contains a great deal of latent structure that can be exploited when determining appropriate testing.

Testing for the AI safety case

Most companies prefer a dual or ‘closed-loop’ approach to testing the safety of AI-based driving systems. This requires a combination of simulation – with model in the loop (MIL), driver in the loop (DIL), or both – and real-world driving:

• Simulation allows developers to repeatedly run unlimited mileage under specific test conditions. In this way the safety case for an AV navigating a multi-lane traffic circle, for instance, can be tested without having to negotiate the actual environment, reducing risk and saving time (see the harness sketch after this list).
• Once the AI performs all tasks as expected, the test can be reproduced in the real world – either under controlled conditions at a test facility, or on the road. This stage is again recorded for further evaluation, if required.
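
A simulation regression harness of the kind described above might be organised roughly as follows; the scenario names, the simulate() stub and the pass criteria are hypothetical placeholders rather than any vendor's API.

```python
# Hypothetical closed-loop regression harness: each scenario is replayed in
# simulation and checked against simple pass criteria before any real-world run.
from dataclasses import dataclass

@dataclass
class Result:
    collisions: int
    min_gap_m: float
    completed: bool

SCENARIOS = ["multi_lane_roundabout", "cut_in_heavy_rain", "stop_sign_occluded"]

def simulate(scenario: str, seed: int) -> Result:
    # Stand-in for a model-in-the-loop simulator run; the seed would vary
    # weather, traffic density and other conditions in a real tool.
    return Result(collisions=0, min_gap_m=2.5, completed=True)

def passes(r: Result) -> bool:
    return r.completed and r.collisions == 0 and r.min_gap_m >= 1.0

for scenario in SCENARIOS:
    results = [simulate(scenario, seed) for seed in range(100)]
    pass_rate = sum(passes(r) for r in results) / len(results)
    print(f"{scenario}: {pass_rate:.0%} pass rate")
    # Only scenarios with a clean simulated record graduate to track testing.
```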

There is a growing tendency for developers to generate billions of miles of synthetic data for ML models to learn from. However, the gap between synthetic and real datasets may mean that a network trained on synthetic data does not perform well enough in the real world.

One solution, domain adaptation, allows the system to learn a well-performing model from a source data distribution that differs from, but is related to, the target data distribution.
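
One common family of domain adaptation techniques aligns feature statistics between the two domains. The sketch below shows a CORAL-style alignment loss – an assumed choice for illustration, not necessarily what any particular AV developer uses – which would be added to the usual supervised loss on the labeled synthetic data.

```python
# Sketch of a CORAL-style domain adaptation loss: penalize the difference
# between the feature covariances of synthetic (source) and real (target)
# batches coming out of a shared backbone network.
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    d = source_feats.size(1)
    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)
    return ((cov(source_feats) - cov(target_feats)) ** 2).sum() / (4 * d * d)

# Toy stand-ins for backbone features from the two domains.
synthetic_feats = torch.randn(64, 256)          # labeled, rendered frames
real_feats = torch.randn(64, 256) * 1.5 + 0.3   # unlabeled camera frames

print("alignment penalty:", coral_loss(synthetic_feats, real_feats).item())
```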

Virtual KITTI is an example of bridging the synthetic-to-real data gap by cloning the real world in a virtual one through domain adaptation. Generative Adversarial Networks (GANs) are also a promising and popular deep learning technique for learning how to make simulations more realistic.
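
A compact sketch of that GAN-based "refinement" idea is shown below; the tiny networks, random stand-in frames and loss weighting are illustrative assumptions only.

```python
# GAN sketch: a refiner makes rendered frames look more like camera frames,
# while a discriminator tries to tell them apart. Networks and data are toys.
import torch
import torch.nn as nn

refiner = nn.Sequential(                       # synthetic frame -> "realistic" frame
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

discriminator = nn.Sequential(                 # frame -> real/refined logit
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Conv2d(16, 1, 1), nn.Flatten())

bce = nn.BCEWithLogitsLoss()
opt_r = torch.optim.Adam(refiner.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

synthetic = torch.rand(8, 3, 64, 64)           # stand-in for rendered frames
real = torch.rand(8, 3, 64, 64)                # stand-in for camera frames
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

# Discriminator step: label real frames 1, refined synthetic frames 0.
d_loss = (bce(discriminator(real), ones)
          + bce(discriminator(refiner(synthetic).detach()), zeros))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Refiner step: fool the discriminator while staying close to the original
# frame, so the synthetic frame's labels remain valid for the refined image.
refined = refiner(synthetic)
r_loss = bce(discriminator(refined), ones) + 0.1 * (refined - synthetic).abs().mean()
opt_r.zero_grad(); r_loss.backward(); opt_r.step()
print("d_loss:", d_loss.item(), "r_loss:", r_loss.item())
```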

Although the focus is obviously on creating and testing AI safety cases related to driving, the principles apply to other AI-driven functions, such as gesture control or the natural-language recognition in Mercedes-Benz’s User Experience (MBUX) system.

Sources:
• James M. Amend; Wards Auto; Integrated Connectivity Critical to Future Mobility; July 2018; https://www.wardsauto.com/technology/integrated-connectivity-critical-future-mobility?
• Chris Abshire; Toyota AI Ventures; Self-Supervised Learning: A Key to Unlocking Self-Driving Cars?; April 2018; https://medium.com/toyota-ai-ventures/self-supervised-learning-a-key-to-unlocking-self-driving-cars-408b7a6fd3bd
• Andrew Silver; IEEE Spectrum; Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe; January 2018; https://spectrum.ieee.org/cars-that-think/transportation/self-driving/why-ai-makes-selfdriving-cars-hard-to-prove-safe

