Zach Beasley is a Senior Manager at Zendar, responsible for marketing and business development. He is an alumnus of the University of Texas at Austin, with a background in multi-disciplinary design and computer science.
DVN: Hello, Zach. Can you tell us a little about Zendar?
Zach Beasley: At Zendar, we’re building a full-stack autonomous driving solution designed to be more robust and affordable than today’s systems. Zendar achieves this through proprietary radar technologies that enable safer, more reliable driving automation at lower cost and power consumption than camera-centric or lidar-centric systems. Right now, the cost of these systems limits their adoption around the globe. We aim to unlock the next level of scalability and affordability to bring the safety and comfort benefits of ADAS to more markets and more people.
From the beginning, we believed radar would be a better sensor foundation for automated driving than its more costly rival, lidar. However, sensor size and mounting constraints have historically limited radar resolution. Furthermore, the traditional point-cloud-based approach to radar signal processing has held back advances in AI perception, since a point cloud exposes only a limited portion of the radar data to AI. We first had to develop technologies to overcome radar’s limitations in angular resolution and object classification to unlock its full potential.
That vision led to the creation of our Distributed Aperture Radar (DAR), a technology that combines multiple standard radar units into a single, coherent system. DAR technology breaks the link between sensor size and performance, enabling a high-resolution, modular radar system using small, cost-effective sensors.
Building from there, we introduced Semantic Spectrum, an AI-driven perception layer that processes raw radar data to produce a high-precision object model of the environment. Together these two technologies have redefined what is possible with radar sensing in automotive, unlocking both new capabilities for radar and greater efficiency within the overall ADAS stack.
What began as a vision to reimagine radar has evolved into a scalable, software-defined perception technology ready to meet the demands of autonomous mobility. We are now developing a full-stack solution with our semiconductor and tier-1 partners, complete with low-cost AI-enabled SoCs and state-of-the-art sensor hardware, which will usher in a new era of affordability for ADAS technology.
DVN: What kinds of products are you developing?
Z.B.: We’re building automated driving solutions spanning from basic ADAS features like automatic emergency braking (AEB) and adaptive cruise control (ACC) to higher levels of autonomy (L2+ and L3).
Zendar’s automated driving solutions leverage a high-resolution distributed radar network (powered by DAR software) and AI perception software that detects, classifies, and tracks objects in real time (even stationary or occluded ones, which traditional radar often misses). Combined, these technologies enable robust, scalable perception for ADAS and autonomous driving, especially in poor visibility and complex environments.
Our entry-level product is an automatic emergency braking solution, which we expect to be the lowest-cost solution to meet NHTSA’s 2029 AEB regulation (FMVSS 127). Zendar is also developing autonomy solutions for the L2+ and L3 markets which leverage Distributed Aperture Radar and Semantic Spectrum AI.
DVN: What challenges do you aim to solve, and to what degree?
Z.B.: The performance of a sensor technology can be defined by two things: the quality of the data it produces (front-end sensing), and what the system is able to do with that data (perception back-end). The key performance indicators in radar sensing are accuracy and angular resolution, the capacity to distinguish between two nearby objects.
The industry standard resolution for medium-range radar is around 4-5° in azimuth, which limits a vehicle’s ability to distinguish objects, especially in dense or dynamic environments.
With Distributed Aperture Radar (DAR), we achieve an azimuth resolution of around 0.25°, which is about 20x sharper than the medium-range radars commonly used in the industry for forward-facing applications.
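As a back-of-envelope illustration (not from the interview, and using the textbook diffraction-limit approximation θ ≈ λ/D), one can estimate the physical aperture a conventional single-sensor radar would need to match these resolution figures at the typical 77 GHz automotive band — which shows why distributing sensors across the vehicle is attractive:

```python
import math

def aperture_for_resolution(freq_hz: float, res_deg: float) -> float:
    """Aperture size (m) needed for a given azimuth resolution,
    using the textbook approximation theta ~ lambda / D."""
    wavelength = 3e8 / freq_hz  # c / f
    return wavelength / math.radians(res_deg)

# At 77 GHz, ~4 deg resolution needs only a ~5-6 cm aperture,
# but 0.25 deg would need roughly 0.9 m -- far too large for a
# single sensor, yet comparable to the width of a bumper.
print(aperture_for_resolution(77e9, 4.0))
print(aperture_for_resolution(77e9, 0.25))
```

These figures are illustrative only; real antenna designs deviate from the simple λ/D rule, but the order of magnitude matches the bumper-scale baselines a distributed network can exploit.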
The movement towards AI-driven perception introduces a new paradigm for measuring radar performance, beyond classical resolution and accuracy specs. AI bypasses much of traditional signal processing, ingesting raw radar signals and outputting an object model of the world instead of a point cloud. While the point-cloud model is defined by accuracy and resolution KPIs, the output of radar AI perception is measured similarly to camera-based AI, using precision and recall metrics, which tell us how well the system recognizes objects. We measure precision and recall of object detection across various classes of objects (cars, pedestrians, bicycles, etc.), across various ranges and fields of view, and across scenario-based groupings such as driving through fog, or highway versus urban drives.
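To make the evaluation concrete, here is a minimal sketch (my own illustration, with hypothetical counts — not Zendar's evaluation code) of computing per-class precision and recall from matched detections, the standard definitions used for camera-based AI as well:

```python
from collections import Counter

def precision_recall(tp: Counter, fp: Counter, fn: Counter, classes):
    """Per-class precision and recall from true-positive, false-positive,
    and false-negative detection counts."""
    out = {}
    for c in classes:
        p = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        r = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        out[c] = (p, r)
    return out

# Hypothetical counts from one evaluation slice (e.g. "urban, 0-50 m")
tp = Counter(car=95, pedestrian=40)
fp = Counter(car=5, pedestrian=10)
fn = Counter(car=5, pedestrian=10)
print(precision_recall(tp, fp, fn, ["car", "pedestrian"]))
```

In practice the same computation is repeated per range bucket, field-of-view slice, and scenario grouping, which is what turns two simple ratios into a full characterization of the perception system.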
DAR and Semantic Spectrum deliver enhanced perception across critical edge cases: nighttime pedestrian detection, adverse weather, occluded hazards, high-dynamic range situations, and static objects.
DVN: What is your competitive advantage versus other radar suppliers?
Z.B.: While many radar suppliers are building increasingly complex imaging radars to improve resolution, we are taking a software-first approach that avoids costly, specialized hardware. By using off-the-shelf automotive radar sensors in a modular coherent network, we keep costs low and integration simple. Zendar software then transforms these standard sensors into a high-resolution, intelligent perception system. Being inherently software-defined means our system is scalable, cost-efficient, flexible and upgradable over time.
DVN: Could Zendar’s technology offer a solution for the FMVSS 127 pedestrian AEB requirements, particularly the nighttime scenarios?
Z.B.: NHTSA’s latest regulation requires pedestrian AEB systems to detect and respond to pedestrians in low-light and nighttime conditions. It also expands requirements to include a broader range of real-world scenarios: pedestrians crossing or walking along the vehicle path, emerging from occlusions like parked cars, and standing still. Systems must also operate effectively at speeds up to 45 mph for pedestrian scenarios, and up to 90 mph for vehicle-to-vehicle situations.
Camera-based systems often struggle in darkness, high-contrast lighting, and adverse weather. Traditional radar systems, while robust to these situations, tend to miss stationary or closely spaced objects due to low resolution.
Our system addresses both challenges. Radar isn’t affected by lighting or weather, and with Distributed Aperture Radar we improve traditional radar resolution enough to solve pedestrian AEB challenges. Moreover, Semantic Spectrum AI solves stationary-object blindness, an issue common to today’s medium-range radar sensors. This innovation enables a pathway to solving nighttime pedestrian AEB test cases without hardware upgrades, as Semantic Spectrum is compatible with the radars already standard across the vast majority of models sold in the USA.
See this video of Semantic Spectrum AI being tested against the FMVSS 127 AEB scenarios.
DVN: When will you be able to commercialize software with full validation for public roads? How will you validate?
Z.B.: Zendar anticipates start of production (SOP) as early as 2027 for Semantic Spectrum AI, with deployment of Distributed Aperture Radar following between 2028 and 2030, driven by new L3 platform releases.
Our first full-stack L2+ system is currently being developed in India in collaboration with a leading local OEM and tier-1. The anticipated SOP for this system is 2028, with an initial deployment in the highway operational design domain.
We validate our technology using both open source and proprietary data sets, and develop our products to meet all relevant functional safety standards. As part of our partnerships with OEMs and tier-1s, we go through rigorous testing to ensure safety and reliability of our solutions.
DVN: Are you collaborating with potential customers?
Z.B.: Yes, we have active collaborations with two premium German OEMs, a leading Indian OEM, and multiple global tier-1s.
In addition, multiple OEMs have approached us exploring options for a cost-effective solution to meet upcoming AEB regulatory requirements defined by NHTSA.
DVN: How did CES go for you? What were the benefits of attending CES for your company?
Z.B.: There’s no show quite like CES, and I’d say that it’s a key driver of Zendar’s success. We have been invited to showcase our innovations in radar software alongside our semiconductor partners for two years in a row now. The advantage of being partnered with a global semiconductor leader at CES cannot be overstated. This year we were featured in NXP’s booth as part of their technology showcase. High level decision makers at OEMs and tier-1s visit the showcase to see the latest innovations in chip technology, and Zendar’s software solutions are an example of what can be unlocked with NXP’s radar portfolio. It’s a win-win, as our semiconductor partners are able to demonstrate how their chips are enabling new possibilities in radar, while we are able to stand out from the noise and showcase the value of our technology with key decision makers.
DVN: Could your technology also be used on the likes of cameras and lidars?
Z.B.: Distributed Aperture Radar and Semantic Spectrum AI are radar-specific products. Radar has different physics than camera or lidar, so these technologies are not transferable to alternative sensing modalities. Zendar’s innovations in sensing and perception are built specifically for radar, though, being software-defined, we are compatible with a variety of radar hardware and high-performance compute platforms, not just a single hardware device.
Zendar’s full-stack solution uses a fusion of radar and camera to effectively understand and navigate through the world. We do not believe lidar is necessary for safe and reliable driving automation, but it can be used in combination with radar and camera in our perception stack as well if an OEM opts for this kind of sensor fusion.
DVN: Do you think radar + camera perception systems can outperform lidars? If so, what are the relative merits?
Z.B.: Yes, radar + camera fusion can absolutely outperform lidar in many real-world driving scenarios. Lidar provides dense, high-resolution point clouds, but it has several limitations: it’s sensitive to weather, struggles with occlusions, lacks inherent velocity data, and remains costly to scale.
In contrast, radar performs reliably in poor visibility — whether it’s rain, fog, darkness, or glare — and is effective at detecting hazards even when occluded. We gain rich, low-latency spatial understanding of the environment from the radar, which is augmented by contextual understanding of road signs, traffic lights, and an understanding of color and fine detail from the camera.
The result is precision through fusion. With AI-driven radar + camera fusion, we combine radar’s robustness and efficiency with the camera’s fine resolution—creating a system that is highly accurate, while being more resilient and costing less than lidar-based systems.
DVN: Imaging radar hardware gets increasingly complex to reach adequate resolution in azimuth and elevation. Could distributed radar architecture be a solution?
Z.B.: Traditional imaging radars achieve higher resolution by packing antennas into a single sensor, which increases cost, sensor size, and power consumption, all of which make it difficult to integrate into vehicle designs. Zendar’s software-defined approach instead increases radar resolution without increasing sensor size. We use multiple standard radar sensors spaced apart on the vehicle as building blocks in a modular sensor network. We then fuse their signals so the sensors operate together as a single, coherent system, creating a much larger aperture without the increases in cost or power consumption that come with a large monolithic sensor.
This method allows high angular resolution in both azimuth and elevation without the need for expensive radar hardware. OEMs can design high-performance solutions using the same hardware they use on their entry-level models, creating a truly scalable platform and unlocking economies of scale through a simplification of the supply chain.

DVN: Performance of distributed radars depends on a high phase and frequency coherence between the radar units. How do you propose to synchronise?
Z.B.: With DAR, we construct a large coherent aperture in software in order to reach the level of angular accuracy and resolution needed for higher levels of autonomy. In practice, the units must be coherent enough to transmit from one sensor and receive on another. This requires aligning the radar modules at several levels: carrier frequency, time, and phase.
In monolithic radars, coherence across the aperture is achieved through hardware-intensive synchronization. Whether the sensor uses a single chip or cascaded chips, several signals are shared within the HW: a local oscillator (LO) signal, a reference clock, and a frame trigger.
In a DAR architecture, the radars are physically separated. Therefore we need another way to synchronize the radar modules, in HW or in SW. There can be no shared LO signal, and we must find other ways to share a common reference clock and frame trigger.
At Zendar, we have developed approaches to achieve synchronization on each of these parameters. We synchronize radars over Ethernet with a combined software and hardware approach. Coarse synchronization is achieved in hardware, at the tens-of-nanoseconds level; further software synchronization brings that figure below 1 nanosecond. We have also developed monitoring routines to ensure the quality of the sensor alignment during operation.
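To give a sense of scale for those synchronization figures (a back-of-envelope calculation of my own, not from the interview): a clock offset between two radars translates directly into a ranging error, and even a sub-nanosecond offset still spans many carrier cycles at 77 GHz, which is why carrier-phase alignment is listed as a separate requirement from time alignment:

```python
C = 3e8  # speed of light, m/s

def range_error_m(timing_err_s: float) -> float:
    """Two-way range error caused by a clock offset between radar units."""
    return C * timing_err_s / 2

def phase_error_cycles(timing_err_s: float, carrier_hz: float) -> float:
    """Carrier cycles of phase slip corresponding to a timing error."""
    return timing_err_s * carrier_hz

print(range_error_m(10e-9))            # coarse HW sync (~10 ns): ~1.5 m
print(range_error_m(1e-9))             # after SW sync (~1 ns): ~0.15 m
print(phase_error_cycles(1e-9, 77e9))  # 1 ns is still 77 cycles at 77 GHz
```

The numbers show why a layered scheme makes sense: hardware gets the clocks close enough for software to refine, and phase must then be handled at a finer level than timing alone can reach.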