Sensor Fusion: Technical challenges for Level 4-5 self-driving vehicles

Development of all kinds of next-generation radars, cameras, ultrasonic systems and LiDAR sensors is happening at unprecedented speed.

To reach the holy grail of Level 4 and, eventually, Level 5 self-driving vehicles, automotive OEMs, along with a host of legacy and start-up firms, have their work cut out developing new sensor technologies that allow vehicles to see the road ahead – and to the side and behind.

But while developing the best new sensors is a key priority, getting them to function together in a system that is greater than the sum of its parts is arguably the greater challenge.

Sensor fusion is, therefore, an essential piece of the autonomy puzzle. But what is it exactly, and what are the challenges to overcome?

Sensor fusion explained

Sensor fusion is a concept that aims to “[provide] the vehicle with a combination of the best information available from each of its systems while ignoring the rest,” says expert Richard Vanhooijdonk.

This is because each of the radars, cameras and other sensors a vehicle uses for self-driving has its own limitations, which is why their outputs must be combined to deliver improved ADAS functions such as cross-traffic assistance and obstacle avoidance.

For example, whilst camera systems are ideal for identifying roads, reading signs and recognizing other vehicles, LiDAR is superior at accurately calculating the position of the vehicle, and radar performs better at estimating speed.

So, sensor fusion is the smart combination of these and other sensing applications which, when properly bundled and set up, gives autonomous vehicles a thorough 360-degree view of their environment.
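To make the idea concrete, here is a minimal, purely illustrative Python sketch of how a fusion stage might assemble one object track from the strongest attribute of each modality. The detection fields and values are invented for the example, not taken from any particular stack.

```python
from dataclasses import dataclass

@dataclass
class FusedTrack:
    """One tracked object, assembled from complementary sensor strengths."""
    label: str           # object class, typically best estimated by the camera
    position_m: tuple    # (x, y) in metres, typically best estimated by LiDAR
    speed_mps: float     # relative speed, typically best estimated by radar

def fuse_detection(camera_det, lidar_det, radar_det):
    # Hypothetical per-sensor detections: each dict carries the field that
    # modality measures well. A real tracker would also associate detections
    # across sensors and weight them by confidence.
    return FusedTrack(
        label=camera_det["label"],
        position_m=lidar_det["position_m"],
        speed_mps=radar_det["speed_mps"],
    )

track = fuse_detection(
    camera_det={"label": "car"},
    lidar_det={"position_m": (12.3, -1.8)},
    radar_det={"speed_mps": -4.2},
)
print(track)
```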

Challenging times tying sensors together

As explained in the Institute of Electrical and Electronics Engineers (IEEE) paper Fusion of LiDAR and camera sensor data for environment sensing in driverless vehicles:

The heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing needs to be positively utilized for reliable and consistent perception of the environment through sensor data fusion.

However, this is far from easy. Take the fact that the industry is working on fusing distance data, in the form of a 3D point cloud, gathered by a LiDAR sensor, with the luminance data from a wide-angle imaging sensor. In addition, development is ongoing with LiDAR and camera detection fusion in real-time multi-sensor collision avoidance systems. Getting these different systems to communicate with one another in real time is quite the challenge.
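As a rough illustration of what getting these systems to talk to each other involves geometrically, the sketch below projects a few LiDAR points into a camera image using an assumed LiDAR-to-camera rotation and translation and a pinhole intrinsic matrix. Every calibration value here is a placeholder, not a real sensor parameter.

```python
import numpy as np

# Placeholder calibration (assumptions only).
# R maps LiDAR axes (x forward, y left, z up) to camera axes (x right, y down, z forward).
R = np.array([[0.0, -1.0, 0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0, 0.0]])
t = np.array([0.0, -0.08, -0.27])          # translation in metres
K = np.array([[700.0,   0.0, 640.0],       # pinhole camera intrinsics
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar):
    """Map Nx3 LiDAR points (metres) to pixel coordinates plus depth."""
    pts_cam = points_lidar @ R.T + t       # into the camera frame
    in_front = pts_cam[:, 2] > 0.1         # keep points ahead of the lens
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                    # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide -> pixels
    return uv, pts_cam[:, 2]               # pixel coordinates and depth

points = np.array([[10.0, 1.5, 0.2], [25.0, -3.0, 0.5]])
pixels, depths = project_lidar_to_image(points)
print(pixels, depths)
```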

As the IEEE paper continues:

However, these multimodal sensor data streams are different from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other.

Solving this alignment problem is therefore key for engineers attempting to get the best out of multimodal data fusion, and to make autonomous vehicles more reliable, accurate and safe.
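The temporal side of that alignment can be sketched very simply: resampling a lower-rate stream onto the timestamps of a faster one so that each camera frame is paired with a measurement from roughly the same instant. The rates and values below are purely illustrative.

```python
import numpy as np

# Illustrative timestamps (seconds): a 10 Hz LiDAR stream and a 30 Hz camera.
lidar_t = np.arange(0.0, 1.0, 0.10)
lidar_range = 20.0 - 5.0 * lidar_t           # e.g. range to a closing object
camera_t = np.arange(0.0, 0.9, 1.0 / 30.0)

# Resample the slower stream onto the camera timestamps so every image frame
# has a LiDAR measurement referring to (approximately) the same moment.
lidar_range_at_camera_t = np.interp(camera_t, lidar_t, lidar_range)

for t, r in zip(camera_t[:4], lidar_range_at_camera_t[:4]):
    print(f"t={t:.3f}s  interpolated range={r:.2f} m")
```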

How far along are we with AV sensor fusion?

As one might expect, given the promises that have been made about the advent of autonomous vehicles, developments across the field have so far been remarkable.

To improve performance in poor weather, for instance, MIT researchers recently developed a chip that uses signals at sub-terahertz wavelengths to sense objects through mist and dust.

“While infrared-based LiDAR systems tend to struggle in such conditions, the sub-terahertz wavelengths, which are between microwave and infrared radiation on the electromagnetic spectrum, can easily be detected in fog and dust clouds,” explains Vanhooijdonk.

A transmitter in the system sends an initial signal, and a receiver then measures both the absorption and the reflection of the sub-terahertz wavelengths that rebound. “The signal is then sent to a processor, which recreates the image of the object.”
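The article does not spell out the processing, but the distance part of such a reflection measurement is conventionally a time-of-flight calculation, sketched here with made-up numbers.

```python
# Time-of-flight range estimate: the signal travels to the object and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# Illustrative: a 200 ns round trip corresponds to an object roughly 30 m away.
print(range_from_round_trip(200e-9))  # ~29.98 m
```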

One argument against an earlier introduction of sub-terahertz sensing technology into AVs was that the equipment needed to produce a strong enough signal was too large and too expensive.

However, the MIT researchers have come up with a sensor small enough to fit on a chip, yet sensitive enough to deliver meaningful information even in the presence of significant signal noise. “In addition to calculating the distance of the object, the output signal can also be used to create a high-resolution image of the scene, which will be of crucial importance for autonomous vehicles,” adds Vanhooijdonk.

The problem with multimodal data

Despite these crucial developments, many challenges remain. One significant issue is the multimodality of the data at the acquisition and data-source level: the sensors differ in their physical units of measurement, in their sampling resolutions, and in their spatio-temporal alignment.

Uncertainty in the data sources poses further challenges, including noise from calibration errors, quantization errors or precision losses; differences in the reliability of the sources; inconsistent data; and missing values.

In the particular case of LiDAR and camera fusion, engineers find it hard to deal with the spatial misalignment and resolution differences between the heterogeneous sensors.

The industry is working on more robust data fusion approaches that account for uncertainty within the fusion algorithm, as well as on fusion algorithms that need only minimal calibration, since extrinsic calibration methods can be impractical when data must be exchanged between all the sensors.
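One common way to account for uncertainty in a fusion step, sketched here under the assumption that each sensor reports an estimate along with a variance, is inverse-variance weighting; the numbers are illustrative only.

```python
def fuse_with_uncertainty(estimates):
    """Inverse-variance weighted fusion of scalar estimates.

    `estimates` is a list of (value, variance) pairs; less noisy sensors
    get proportionally more weight, and the fused variance shrinks.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Illustrative range-to-object estimates: LiDAR (precise) and camera (noisier).
print(fuse_with_uncertainty([(19.8, 0.04), (21.0, 0.50)]))
```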

Furthermore, fusing data from different sources brings other challenges, such as differences in data resolution: LiDAR output is significantly lower in resolution than the images produced by a camera. This is why a subsequent stage of the data fusion pipeline equalizes the resolutions of the LiDAR data and the imaging data through an adaptive scaling operation.
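As a rough sketch of what such a resolution-equalizing step can look like (the actual adaptive scaling operation used in practice may differ), the snippet below fills a dense depth map at image resolution from a handful of already-projected LiDAR points using nearest-neighbour interpolation; all pixel coordinates and depths are invented.

```python
import numpy as np
from scipy.interpolate import griddata

# Sparse LiDAR returns already projected into the image plane:
# pixel coordinates (u, v) and measured depth in metres (illustrative values).
uv = np.array([[100, 200], [400, 220], [800, 240], [1100, 260]], dtype=float)
depth = np.array([8.5, 12.0, 20.5, 33.0])

# Build a dense depth map at camera resolution by nearest-neighbour
# interpolation -- a crude stand-in for an adaptive scaling/upsampling stage.
h, w = 360, 1280
grid_v, grid_u = np.mgrid[0:h, 0:w]
dense_depth = griddata(uv, depth, (grid_u, grid_v), method="nearest")

print(dense_depth.shape, dense_depth[180, 640])  # depth assigned to a mid-image pixel
```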

In collision avoidance applications these processes become critical, so it comes as no surprise that engineers are working hard on real-time collision avoidance systems, built around a scanning LiDAR and a single RGB camera, designed to stop the vehicle from driving into objects or people.
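A toy illustration of the decision end of such a system, assuming the fusion stages above already deliver an obstacle distance and closing speed: the braking model, reaction time and deceleration limit below are invented for the example.

```python
def should_brake(distance_m: float, closing_speed_mps: float,
                 reaction_s: float = 0.5, max_decel_mps2: float = 6.0) -> bool:
    """Trigger braking if the stopping distance exceeds the gap to the obstacle."""
    if closing_speed_mps <= 0:          # the object is not getting closer
        return False
    stopping = (closing_speed_mps * reaction_s
                + closing_speed_mps ** 2 / (2.0 * max_decel_mps2))
    return stopping >= distance_m

# Illustrative: closing at 15 m/s on an object 22 m ahead -> brake.
print(should_brake(distance_m=22.0, closing_speed_mps=15.0))
```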

Given the seemingly optimistic predictions about when Level 4 and 5 vehicles will become publicly available, engineers are facing a race against time to achieve robust, reliable and safe sensor fusion.

