By Martin Booth & Luc Bourgeois, DVN Sensing & Applications Senior Advisers
Advanced emergency braking (AEB) systems started appearing in the early 2010s. They have improved as the technology has evolved, and they have become capable of handling a wide variety of road scenes. Euro NCAP ratings have made them essential for achieving a 5-star safety score. And the European GSR2 (General Safety Regulation) has required AEB on all new vehicles sold in Europe since July 2024.
The most common perception configurations for these AEB systems are:
- A single perception technology: camera or radar. This setup meets the performance and robustness level required by GSR2 as of July 2024, and can achieve up to 4 stars in the 2023 Euro NCAP protocol.
- Dual perception technologies: camera and radar, with real-time fusion of their information. This setup allows for a 5-star Euro NCAP rating and gives the highest level of performance and robustness: detecting and avoiding collisions with moving or stationary vehicles, partially or fully in the path, up to about 65 km/h. The same goes for vulnerable road users: pedestrians, cyclists, and motorcyclists.
It is evident that AEB has been developed primarily for urban and peri-urban scenarios. Statistics show vehicles with AEB have about 40 per cent fewer accidents than those without. A typical set of Euro NCAP scenarios for evaluating AEB includes:
Car to Car:
- Approaching a car crossing a junction
- Approaching a car head-on
- Turning across the path of an oncoming car
- Approaching a stationary car
- Approaching a slower-moving car
- Approaching a braking car
Pedestrian:
- Car reversing into adult or child
- Adult crossing a road into which a car is turning
- Child running from behind parked vehicles
- Adult along the roadside
Cyclist:
- Approaching cyclist crossing from behind parked vehicles
- Turning across path of an oncoming cyclist
- Approaching a crossing cyclist
- Approaching a cyclist along the roadside
Motorcyclist:
- Approaching a stationary motorcyclist
- Approaching a braking motorcyclist
- Turning across the path of an oncoming motorcyclist
After this initial deployment phase of some 15 years, expected AEB performance is now rising with the Euro NCAP 2030 vision and NHTSA's FMVSS 127 for 2029, which will likely drive an evolution in perception technologies and AEB systems.
FMVSS 127 requires an AEB system to avoid high-speed collisions: 100 km/h for stationary objects! This implies a need for long-range radar or lidar. The standard also requires systems to avert collisions with pedestrians, day and night, at up to 72 km/h without external lighting of the driving scene. The car must therefore either have a lighting system capable of illuminating the scene well enough to support that AEB performance, or be equipped with perception technology that guarantees pedestrian detection at night, such as lidar or infrared cameras.
To quantify the additional performance required of AEB by 2029, consider the case of avoiding collisions with stationary vehicles at up to 100 km/h, compared with current AEB systems, which avoid collisions at up to about 65 km/h.
Under optimal adhesion conditions (µ = 1), it takes approximately 25 metres to stop a vehicle travelling at 65 km/h with a deceleration of 1 g. Under the same µ = 1 conditions, it takes about 50 metres to stop a vehicle travelling at 100 km/h (10 metres covered during the braking system's reaction time plus 40 metres of deceleration at 1 g).
Consequently, full braking must be applied approximately 1.8 seconds before impact, compared with 1.2 seconds today. This extra 0.6 seconds is a challenge for the detection system, especially its ability to avoid false positives that could trigger unintended braking.
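The arithmetic behind these figures can be sketched in a few lines. This is only an illustration of the reasoning above: it assumes a 1 g deceleration (µ = 1) and derives the braking-system reaction time from the 10 metres quoted for 100 km/h (about 0.36 s); the "time before impact" is simply the stopping distance divided by the initial speed.

```python
G = 9.81                            # 1 g deceleration, m/s^2
REACTION_TIME = 10 / (100 / 3.6)    # s, from 10 m covered at 100 km/h

def stopping_distance(speed_kmh: float) -> float:
    """Total stopping distance: reaction distance + braking distance v^2/(2g)."""
    v = speed_kmh / 3.6             # convert to m/s
    return v * REACTION_TIME + v * v / (2 * G)

def time_before_impact(speed_kmh: float) -> float:
    """How early full braking must begin: time to cover the stopping
    distance at the (unbraked) initial speed."""
    v = speed_kmh / 3.6
    return stopping_distance(speed_kmh) / v

for kmh in (65, 100):
    print(f"{kmh} km/h: stop in {stopping_distance(kmh):.0f} m, "
          f"brake {time_before_impact(kmh):.1f} s before impact")
```

Under these assumptions the sketch reproduces the ~50 m and ~1.8 s figures for 100 km/h, and lands near the ~25 m and ~1.2 s quoted for 65 km/h.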
So, the new performance requirements for AEB raise questions about the sensors’ performance and the best ways to fuse their information, while avoiding severe false positives.
Advanced engineering phases are under way at suppliers and automakers to choose new configurations, and we will soon see the selections and compromises that ensure a favourable risk-benefit ratio to achieve the traffic-accident reduction goals set by Euro NCAP and FMVSS 127.
NHTSA projects that the new standard will save at least 360 lives and prevent at least 24,000 injuries annually. Many of those accidents occur at night or in other low-visibility conditions.
The new standard requires cars to stop and avoid contact with a vehicle ahead at speeds up to 100 km/h, and to apply the brakes automatically at up to 130 km/h when a collision with a lead vehicle is imminent. Pedestrian collisions must be avoided at up to 72 km/h, in both day and night conditions.
Today’s AEB systems typically use a camera and some sort of image processor plus an ‘AI’ detection algorithm—often part of the L2 driver-assist system, using the same components. However, visible-light camera-based systems do not perform well at night, or in fog, direct sunlight, and other challenging conditions. The 0.2 lux requirement of the NHTSA mandate makes a stop from 45 mph (72 km/h) difficult, including the time needed to detect and classify objects: vehicle headlights can only illuminate so far ahead.
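To see why the night-pedestrian case is hard, here is a minimal sight-distance sketch. The latency and deceleration figures are illustrative assumptions, not numbers from the article: roughly 0.8 s combined detection, classification, and brake-actuation latency, and a comparison between an ideal 1 g stop and a more typical AEB deceleration of around 0.6 g.

```python
G = 9.81  # m/s^2

def required_sight_distance(speed_kmh: float,
                            latency_s: float = 0.8,
                            decel_g: float = 1.0) -> float:
    """Distance at which a pedestrian must be detected: distance covered
    during the detection/classification/actuation latency, plus the
    braking distance v^2 / (2 * a)."""
    v = speed_kmh / 3.6
    return v * latency_s + v * v / (2 * decel_g * G)

# Ideal 1 g stop vs a gentler, more typical AEB deceleration:
print(f"1.0 g: pedestrian must be seen {required_sight_distance(72):.0f} m out")
print(f"0.6 g: pedestrian must be seen "
      f"{required_sight_distance(72, decel_g=0.6):.0f} m out")
```

Under these assumptions, the system must detect the pedestrian roughly 36 metres out even with ideal 1 g braking, and about 50 metres out at 0.6 g; unlit pedestrians under low beams at 0.2 lux are often not reliably resolvable by a visible-light camera at such distances.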

According to Teledyne FLIR, citing VSI Labs test reports, only one of eight 2023-’24 U.S.-spec vehicles tested was able to pass all of the FMVSS 127 tests with its existing camera and radar sensors, which shows the general inadequacy of existing sensor suites to meet the new requirements.
RCCG colour filter arrays, deep trench isolation, composite metal grids, and other advanced CMOS processing techniques, along with advanced image processing, improve low-light sensor performance and dynamic range, but still do not offer an ideal solution.

Mobileye believes it is possible to meet the new requirements with a camera-only approach, but this will depend strongly on vehicle headlight design and ‘AI’ processing capability.
Shortwave infrared (SWIR) sensors from vendors like Sony use an InGaAs photodiode layer bonded to a CMOS silicon substrate with the readout circuits. Achieving fine-pitch bonding (and therefore smaller pixels) is key to improved resolution. Single cameras that cover both the visible spectrum and SWIR are possible, but expensive. New CQD (colloidal quantum dot)-based sensors from companies like SWIR Vision Systems promise significant cost reduction by using more standard CMOS processing techniques, but SWIR may be more appropriate for in-cabin sensing, since the wavelength does not give significantly better detection capabilities than visible light in fog. SWIR also requires active illumination, which presents cost and power challenges for long-range detection.
TriEye also has a CMOS-based HD SWIR sensor allowing 2D imaging and 3D mapping of the road in all visibility conditions, at much lower cost than lidar. It uses a 1,135 nm laser illuminator and needs no InGaAs detector.

Lidar also has the potential to overcome cameras’ night-vision limitations, because it uses active illumination. Costs in China are dropping toward $200, but lidar units are still considered too expensive for standalone AEB systems today. As more automakers use lidar for L3 driving (or L2 in China), the lidar units can also serve the AEB function. In fact, AEB is the main lidar function in China today.
FMCW lidar has the advantage of instantaneously measuring speed as well as distance, and has better adverse-weather performance than ToF lidars. Perception accuracy is much better than with camera systems, and lidar is better at classifying irregular objects. It can also reduce false activations, for example on steep slopes or with metal plates on the ground. Swiss Re did a study on collision avoidance in cars with and without lidar, and found an improvement of at least 25 per cent with lidar enabled.
HD radar resolution is also improving, making this a promising alternative to lidar as a second sensor for L2+ driving and potentially the AEB function. HD radar also is not affected by lighting or weather conditions. New radars with air-waveguide antennas have a range of up to 300m and can detect objects like motorcycles at close to those distances. These 4D radars not only scan the horizontal plane, but also measure the height of objects using a 2D array of antennas, which helps reduce false positives from things like manhole covers. Two corner radars can offer a 250° field of view. Lidar can still outperform radar in many small-object detection scenarios, though, with a 10× improvement in angular resolution.
Another possible solution is the FIR (far infrared) or thermal camera. Traditionally these have been expensive and so confined to defence and surveillance applications, but improved microbolometer sensor technologies have allowed for cost reductions and better resolutions.
The Owl Thermal Ranger, for example, provides 1-megapixel resolution over the full LWIR spectrum at 120 fps, and doesn’t rely on active illumination. A chip stack places the readout logic (ROIC) directly under the sensor, and an FPGA is typically used today for the digital logic. The Owl camera also does not need a traditional ISP; those functions can be performed in the ROIC, which also allows shutterless operation.
Thermal cameras do not need lasers, scanning systems, or optical alignment. Owl believes a VGA-resolution thermal camera should cost under $90 by 2029, when the U.S. AEB mandate goes into effect, and an HD camera under $250. Thermal cameras are also smaller than lidar units, and total power can be less than 3 W. Owl CEO Chuck Gershman told DVN his company “see a huge increase in interest in thermal cameras from the automakers since the new NHTSA regulation came out, and believe this will be an optimum solution for next generation ADAS designs”.

Thermal cameras have traditionally required expensive germanium lenses to transmit the FIR wavelengths. Newer chalcogenide glass materials and wafer-scale moulding technology from companies like Umicore are making significant strides in cost reduction here.

In September 2024, VSI Labs completed FMVSS 127 PAEB testing with a vehicle equipped with Teledyne FLIR’s latest automotive thermal camera and Prism™ ‘AI’ perception software. The LWIR camera could see multiple times farther down the road than the headlights could illuminate, giving improved detection, fewer false positives, and gentler deceleration. LWIR is better than visible-light cameras, radar, or lidar at detecting the objects you want to brake for, including animals on the road, and it works better than lidar in fog and heavy rain.


There are also unique integration challenges with lidar and IR cameras. Standard windshield glass does not transmit FIR wavelengths, so automotive windshield manufacturers like Saint-Gobain have developed glass with a special crystal area for the FIR camera—allowing standard and IR cameras to be mounted side-by-side.
It remains to be seen what the optimum AEB solution will be. A camera-only solution with appropriate headlight design and ‘AI’ processing capability may be able to meet the standard. A low-cost SWIR or FIR (thermal) camera may be better for standalone applications. If the automaker is also providing L3 driving, then a camera-plus-HD-radar or camera-plus-lidar solution may be a reasonable choice.
In all cases, the ‘AI’ software and processing for pedestrian and other object detection and for sensor fusion is a key part of the overall solution. Processing (object detection and classification) at the sensor can reduce latency and improve scalability, but it may be more cost-effective in the central computer, where sensor fusion is also done.
Owl, for example, can provide ROS outputs of classified objects from inside the camera module, which lowers the main ECU’s performance and power requirements.