Will LiDAR give fully autonomous cars a clear vision of the road ahead?

Peter Els

After the tragic May 2016 accident in which a Tesla Model S crashed into a trailer while Autopilot was engaged, Tesla speculated that a possible cause was the cameras' difficulty in distinguishing the white trailer against the bright Florida sky.

As this was the first recorded fatality involving a vehicle in self-driving mode, it’s natural that this comment by the manufacturer would elicit debate, especially after Tesla CEO Elon Musk had previously dismissed the need for LiDAR (Light Detection And Ranging), suggesting the technology “didn’t make sense” in the context of a car.

However, not everyone agreed. According to German supplier ZF Friedrichshafen’s CEO, Stefan Sommer, self-driving cars require multiple detection systems, including expensive LiDAR technology, if they are to be safe at high speeds.

"For autonomous driving, we will need three core technologies: picture processing camera technology, short and long-range radar, and LiDAR," Sommer added.

LiDAR adds context to imaging

LiDAR technology can identify the contours and contrasts of obstacles which normal cameras are unable to detect, particularly in low light and low contrast situations.

In a dramatic demonstration of LiDAR’s low-light performance, Ford recently navigated a Ford Fusion Hybrid autonomous research vehicle, with no headlights, along a twisty stretch of desert road at night, guided only by LiDAR.

The test demonstrated that even without cameras, which intrinsically rely on light, Ford's LiDAR, coupled with the car’s virtual-driver software, is sensitive enough to steer the vehicle around objects in total darkness.

In operation, LiDAR sends out short pulses of invisible laser light and measures the time the pulses take to return to the sensor. From this, both the distance to the target and the intensity of its reflection can be measured with excellent accuracy, and the results used to construct a 3-D map of the vehicle’s surroundings.

However, the technology still faces several challenges:

  • Currently, the technology is very expensive: high-resolution LiDARs are made in small quantities and can cost more than the car itself (although newer units are now appearing at sub-$1,000 price points)
  • Image resolution is marginal: images are typically resolved at 64 pixels high, at about a 10 Hz refresh rate
  • Operating range is limited: typical LiDARs see well to about 70 m, but can identify larger objects, such as cars, to around 100 m; 1.5-micron LiDARs, which are even more expensive, can see further
  • Scanning LiDARs rely on moving parts to sweep the scene, which adds mechanical complexity
  • Refresh rates tend to be sluggish
  • Scanning LiDARs are also sensitive to distortion caused by the movement of both the scanning car and the objects being scanned
  • LiDAR efficacy is reduced in adverse conditions such as rain and snow
  • LiDARs require a clear view of the scan path to function optimally, which often complicates packaging

Despite these challenges, scanning LiDAR is still considered critical for self-driving vehicles because of its ability to accurately map the environment around the car.


Scanning LiDAR creates panoramic vision

LiDAR sensors determine distance by measuring the Time of Flight (ToF) of a short laser pulse from the sensor to an object and back, calculating the distance from the known speed of light, approximately 0.3 m per nanosecond.
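As a rough illustration, the round-trip calculation described above can be sketched in a few lines (the 467 ns example figure is ours, not from the article):

```python
# Sketch of the basic LiDAR time-of-flight calculation: the pulse travels
# to the target and back, so the one-way distance is half the round-trip
# time multiplied by the speed of light (~0.3 m per nanosecond).

C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_ns: float) -> float:
    """Convert a round-trip pulse time (nanoseconds) to target distance (m)."""
    return C * (round_trip_ns * 1e-9) / 2.0

# A return after ~467 ns corresponds to a target about 70 m away,
# roughly the reliable range quoted for typical scanning units:
print(round(tof_to_distance(467.0), 1))  # → 70.0
```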

In a scanning LiDAR, vertical fields of view of 30° to 40° are combined with a full 360° horizontal field of view by rotating the laser/detector pairs up to 20 times per second. By fusing multiple laser/detector pairs (up to 64) into one sensor and pulsing each at 20 kHz, up to 1.3 million data points can be recorded per second.
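A quick sanity check of these throughput figures, using only the numbers from the text (64 pairs, 20 kHz pulse rate, 20 rotations per second):

```python
# Back-of-the-envelope throughput for the scanning LiDAR described above.
PAIRS = 64            # laser/detector pairs fused into one sensor
PULSE_RATE_HZ = 20_000  # pulses per second, per pair
ROTATION_HZ = 20      # full rotations per second

points_per_second = PAIRS * PULSE_RATE_HZ
print(points_per_second)  # → 1280000, i.e. ~1.3 million points/s

# Horizontal angular resolution: each pair's 20,000 pulses per second are
# spread over 20 rotations, i.e. 1,000 pulses per 360° sweep.
pulses_per_rev = PULSE_RATE_HZ // ROTATION_HZ
print(360 / pulses_per_rev)  # → 0.36 (degrees between successive pulses)
```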

In addition to distance measurements, high-quality LiDAR sensors also accurately measure reflectivity, which allows for easy detection of retro-reflectors such as street signs, license plates, and lane markings.

Whilst the ability to obtain a 360-degree 3D map of a vehicle’s environment in real time is important, several suppliers are turning to forward-facing flash LiDAR for effective autonomous vehicle navigation.

Flash LiDAR keeps eyes on the road

3D Flash LiDAR operates very much like a 2D digital camera, but its 3D focal plane array adds the depth and intensity information required for accurate environment detection. In Flash LiDAR, each pixel records the time the camera’s laser flash pulse takes to travel to the target and bounce back to the camera’s focal plane (sensor). A short-duration, large-area light source (the pulsed laser) illuminates the objects in front of the focal plane, and the laser photons are “back-scattered” towards the camera receiver. This energy is collected by an array of smart pixels, where each pixel samples the incoming photon stream and renders depth (3D) and location (2D), as well as reflective intensity.

Furthermore, individual pixels have independent triggers and counters to record the ToF of the laser pulse to the object(s). From this, the range of each object in front of the camera is calculated and a 3D point-cloud frame is generated at video rates of up to 60 frames per second. Currently, 20 or 44 analog samples are captured for each pixel per pulse, allowing for accurate pulse profiling. Because range follows directly from the speed of light, it is a simple, direct calculation (unlike stereoscopic camera systems, which must interpolate range from lens disparity).
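A minimal sketch of this per-pixel conversion, assuming a hypothetical 128 × 128 focal plane array (which is what yields the 16,384 points per flash cited in this article); the frame contents here are simulated, not real sensor data:

```python
# Illustrative sketch (not a vendor API): converting a flash LiDAR's
# per-pixel round-trip times into a depth frame. Every pixel measures
# its own ToF, so one laser flash produces a full 3D frame.

C = 299_792_458.0  # speed of light, m/s

def tof_frame_to_depth(tof_ns):
    """Per-pixel round-trip times (ns) -> one-way ranges in metres."""
    return [[C * (t * 1e-9) / 2.0 for t in row] for row in tof_ns]

# Simulated 128 x 128 frame: every pixel reports a ~333.6 ns return.
frame = [[333.6] * 128 for _ in range(128)]
depth = tof_frame_to_depth(frame)
print(len(depth) * len(depth[0]))  # → 16384 points per flash
print(round(depth[0][0], 1))       # → 50.0 (metres)
```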

The 16,384 data points that 3D Flash LiDAR cameras capture per single flash (frame) allow for high-rate dynamic scene capture and 3D video that mechanical scanners cannot match. With no moving or other mechanical parts to add weight and complexity, the cameras are small, light and durable, and largely free of motion distortion.

Even though the benefits of flash and scanning LiDAR cannot be ignored, the costs involved are still too high to drive significant growth in the market. As Stefan Sommer explains: “LiDAR technology is currently too expensive to be incorporated in mass production vehicles, but investments in the technology will bring economies of scale that will likely lower the costs to a manageable level.”

Solid-state technology improves performance and reduces costs

With laser emitters making up the greater part of the cost of LiDAR, several manufacturers are investigating solid-state technology that not only reduces cost but also improves performance. Using low-cost solid-state electronics, pulses can be sent out roughly a microsecond apart, allowing approximately a million data points to be recorded every second. And because the system is entirely solid-state, each pulse can be directed completely independently: one pulse in one direction, and the next in a completely different direction just a microsecond later.
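A back-of-the-envelope sketch of what this independence enables. The scheduling scheme and the directions below are hypothetical illustrations, not any vendor's interface; the point is that, unlike a rotating scanner, consecutive pulses need not land next to each other:

```python
# With ~1 pulse per microsecond and no mechanical constraint, roughly a
# million independently aimed measurements fit into each second.
PULSE_PERIOD_US = 1.0
pulses_per_second = int(1_000_000 / PULSE_PERIOD_US)
print(pulses_per_second)  # → 1000000

# Hypothetical scheduler: interleave a coarse sweep of the full 120°
# field of view with dense revisits of a region of interest (azimuth,
# elevation pairs in degrees) -- something a spinning head cannot do.
coarse = [(az, 0) for az in range(-60, 61, 12)]             # 11 coarse steps
roi = [(az, el) for az in (20, 22, 24) for el in (-2, 0, 2)]  # dense patch
schedule = []
for i, direction in enumerate(coarse):
    schedule.append(direction)          # keep scanning the whole scene...
    schedule.append(roi[i % len(roi)])  # ...while re-sampling the ROI
print(len(schedule))  # → 22
```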

These first generation solid-state arrays are typically achieving up to 2.2 million data points per second, with a field of view of 120 degrees both horizontally and vertically. The minimum range is 10 centimeters, and the maximum range is at least 150 meters at 8 percent reflectivity. At 100 meters, the distance accuracy is +/- 5 cm, and the minimum spot size is just 9 cm.

By employing an optical phased array as a transmitter, which steers pulses of light by shifting the phase of the laser pulse as it is projected through the array, unit costs could be reduced to between $60 and $250.
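For illustration, the standard phased-array steering relation shows how a phase step across the array translates into a beam angle. This is textbook physics rather than a detail from the article, and the wavelength and emitter pitch below are assumed values, not a specific product's parameters:

```python
# Optical phased-array beam steering (illustrative, assumed parameters):
# a constant phase step dphi between adjacent emitters spaced d apart
# steers the beam to theta = arcsin(wavelength * dphi / (2 * pi * d)).

import math

wavelength = 905e-9       # m; a common automotive LiDAR wavelength (assumption)
pitch = 2e-6              # m; emitter spacing (hypothetical)
phase_step = math.pi / 4  # radians of phase difference between neighbours

theta = math.asin(wavelength * phase_step / (2 * math.pi * pitch))
print(round(math.degrees(theta), 2))  # → 3.24 (degrees off boresight)
```

Sweeping the phase step electronically sweeps the beam, which is why no moving parts are needed.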

LiDAR gets a vote of confidence

With auto manufacturers jostling to bring fully autonomous vehicles to series production, it comes as no surprise that Ford has committed to introducing a self-driving car with no steering wheel or brake pedal by 2021. To realise this objective, Ford has teamed up with China’s leading search-engine company, Baidu, to make a combined $150 million investment in Velodyne LiDAR, the recognized global leader in LiDAR technology. This investment will allow Velodyne to rapidly expand the design and production of high-performance, cost-effective automotive LiDAR sensors, accelerating mass adoption in autonomous and ADAS applications.

Following this announcement, the automotive industry may well be looking at a historic moment in the evolution of ADAS and self-driving vehicles: a paradigm shift in the cost and performance of LiDAR that will make the technology accessible to all vehicle segments, not only top-end luxury cars.


Company information according to § 5 Telemediengesetz
IQPC Gesellschaft für Management Konferenzen mbH
Address: Friedrichstrasse 94, 10117 Berlin
Tel: 49 (0) 30 20 913 -274
Fax: 49 (0) 30 20 913 240
E-mail: info@iqpc.de
Registered at: Amtsgericht Charlottenburg, HRB 76720
VAT-Number: DE210454451
Management: Silke Klaudat, Richard A. Worden, Michael R. Worden
