Brad Rosen, COO of NODAR, is a seasoned business executive and entrepreneur. With seven tech startups under his belt, Brad has a proven track record of taking ideas from inception through product-market fit to exit. Prior to NODAR, Brad started, ran, and sold Drync, a venture-backed B2B platform for retailers of beverage alcohol. Before Drync, Brad served as VP Product at Where, a mobile-first location-based-services company that was sold to eBay. Earlier in his career, Brad held roles at Cognio, maker of a full-stack spectrum-analysis system, which was sold to Cisco; Ucentric Systems (sold to Motorola); and PureSpeech (sold to Philips). Brad holds an Electrical Engineering degree from the University of Colorado and an MBA from MIT’s Sloan School of Management.
We met and interviewed Brad last month to learn more about NODAR and its products:
DVN: NODAR is using 3D vision technology with advanced perception software to improve safety. Is there a specific stereo camera you are using for automotive applications today?
Brad: We design our perception stack to be camera-agnostic, allowing customers to bring their own cameras (BYOC). While we provide a fully integrated solution, including automotive-grade cameras, through our Hammerhead Development Kit (HDK 2.0), some customers integrate their own cameras with our software, giving them flexibility across automotive, robotics, and infrastructure applications.
DVN: Which silicon partner did you develop your perception software with?
Brad: Our software is optimized to run on NVIDIA AGX Orin platforms, though we remain flexible to work with other customer-selected compute platforms as needed. This flexibility future-proofs deployments and supports a broad range of applications, from automotive to industrial automation.
DVN: How accurate is the depth information at 1 m, 10 m, and 100 m?
Brad: At 1 meter, our depth accuracy is within a few millimeters. At 10 m we maintain centimeter-level precision, and even at 150 m our accuracy is typically within 0.2% of range, or about 30 centimeters. This level of precision is made possible by our online calibration algorithms and our support for wide-baseline camera mounting (the baseline is the distance between the cameras). The farther apart the cameras are, the better the accuracy and range. In practical terms, our software can reliably detect a 10 cm object at 150 meters, or even an overturned motorcycle at 350 meters.
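For readers who want to see why baseline matters, the textbook stereo relation depth = f·B/d (focal length f in pixels, baseline B, disparity d) implies a depth error that grows with the square of range and shrinks linearly with baseline. The sketch below is a back-of-the-envelope illustration; the focal-length and disparity-noise values are our own assumptions, not NODAR’s parameters:

```python
# Back-of-the-envelope stereo depth error: delta_Z ~ Z^2 * delta_d / (f * B).
# The focal length and sub-pixel disparity noise below are assumptions chosen
# for illustration; they are not NODAR's published specifications.

def depth_error_m(range_m: float, baseline_m: float,
                  focal_px: float = 3000.0,
                  disparity_noise_px: float = 0.05) -> float:
    """Approximate 1-sigma depth error for a calibrated stereo pair."""
    return (range_m ** 2) * disparity_noise_px / (focal_px * baseline_m)

for z in (1.0, 10.0, 100.0, 150.0):
    print(f"{z:5.0f} m -> ~{depth_error_m(z, baseline_m=1.0) * 100:.1f} cm error")
```

Doubling the baseline halves the error at every range, which is why independently mounted, widely spaced cameras pay off at long distances.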

DVN: How far out can you identify an object like a pallet lying on the freeway?
Brad: A standard NODAR system (5 MP cameras, 1 m baseline) can detect a 14 cm wooden pallet at 200 m. For larger or higher-contrast obstacles, such as vehicles, detection ranges extend even further, while at closer ranges the system can detect much smaller objects. This long-range precision makes our solution ideal for safety-critical autonomy applications that require high-confidence object detection at long range.
DVN: What sort of compute resources are required to run the perception stack?
Brad: The compute resources required depend on the requirements of the application. For instance, a high-speed autonomous vehicle traveling at 80 mph will require more compute resources than a tractor traveling at 13 mph. Resolution and frame rate are optimized for the ODD (operational design domain) and the compute constraints of the target system. A full instance of the Hammerhead perception stack, running at full resolution and frame rate and outputting 50 million depth measurements per second, utilizes only a small portion of an NVIDIA AGX Orin. For applications needing lower resolution and/or frame rate, NODAR supports lightweight configurations and can tailor solutions to the performance requirements.
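The 50-million figure follows directly from dense per-pixel depth: every pixel in every frame yields one measurement. A quick check, assuming a 5 MP imager at 10 frames per second (the frame rate is our assumption):

```python
# Dense stereo produces one depth measurement per pixel per frame, so
# measurements/s = pixels per frame * frames per second.
# The 10 fps frame rate is an assumption for this worked example.

width_px, height_px = 2592, 1944    # a typical 5 MP imager
fps = 10

rate = width_px * height_px * fps
print(f"~{rate / 1e6:.0f} million depth measurements per second")
# -> ~50 million/s, consistent with the figure quoted above
```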
DVN: Has your software already been trialed by a major auto OEM or what are your plans to enter the automotive industry?
Brad: We are currently deploying the solution to production in several non-automotive environments. In the automotive sector, we are working with a select number of OEMs to support their L3 and automated-parking initiatives. Unfortunately, in all cases we are bound by NDAs that prohibit us from mentioning customer names.
DVN: How does the latency to detect an object (e.g., a stationary car in your lane) at 100 m compare to imaging radar and lidar?
Brad: NODAR takes about 50-75 ms to detect an object the size of a car at 100 m, while lidar takes 100-200 ms and imaging radar 150-300 ms. NODAR processes 50 million depth measurements per second, while a high-resolution lidar produces between 1 and 5 million measurements per second. A lidar must sweep its laser across the scene many times before it has enough points on the target to be confident an object is there, whereas NODAR detects the object in every frame. Most imaging radars will have difficulty with a stationary object, as it will be filtered out as “clutter”. Like lidar, the radar requires multiple returns (tracking cycles) from an object, as well as time for classification.
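To make the points-on-target argument concrete, here is a rough comparison of how many samples each sensor lands on a car-sized object at 100 m in a single scan or frame. The angular resolution and focal length are generic assumptions, not any vendor’s specifications:

```python
import math

# Hypothetical per-scan/per-frame samples on a 1.8 m x 1.5 m target at 100 m.
# Sensor parameters are generic assumptions for illustration only.

target_w, target_h, rng = 1.8, 1.5, 100.0
az_deg = math.degrees(2 * math.atan(target_w / (2 * rng)))   # ~1.0 deg wide
el_deg = math.degrees(2 * math.atan(target_h / (2 * rng)))   # ~0.9 deg tall

lidar_res_deg = 0.2                       # assumed long-range lidar resolution
lidar_points = (az_deg / lidar_res_deg) * (el_deg / lidar_res_deg)

focal_px = 3000.0                         # assumed stereo camera focal length
stereo_pixels = (target_w / rng * focal_px) * (target_h / rng * focal_px)

print(f"lidar:  ~{lidar_points:.0f} points per scan")
print(f"stereo: ~{stereo_pixels:.0f} depth pixels per frame")
```

Under these assumptions the stereo pair lands roughly two orders of magnitude more samples on the target per frame, which is where the latency advantage comes from: a confident detection needs fewer frames.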
DVN: Versus a single forward-facing camera system, how does the cost/benefit of NODAR’s stereo solution compare?
Brad: Compared to a monocular camera system, which uses AI to estimate where objects are based on training data, NODAR’s stereo approach provides a physical measurement of true depth to any object, resulting in higher reliability and safety. While the monocular system has a lower hardware cost, with only one camera, it suffers in two important ways: 1) the mono camera can only estimate depth for objects and situations similar to those it has been trained on, and often fails in edge-case scenarios; and 2) the neural networks are computationally intense and require decimation of the camera images, for example from 5 megapixels (2592 × 1944 pixels) down to 640 × 480 (VGA). In reducing image size, important data are lost and the images become “grainy”, making object identification more difficult.
In terms of system cost, the monocular camera system will typically require dedicated compute (GPU) to run its neural networks, while NODAR can share the car’s centralized GPU with other functions.
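The cost of that decimation is easy to quantify from the resolutions quoted above:

```python
# Worked example using the figures from the text: downscaling a 5 MP frame
# to VGA discards roughly 94% of the pixel data.

full_px = 2592 * 1944   # 5,038,848 pixels
vga_px = 640 * 480      # 307,200 pixels

print(f"reduction: {full_px / vga_px:.1f}x "
      f"({100 * (1 - vga_px / full_px):.0f}% of pixels discarded)")
```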
DVN: How does your auto-calibration work?
Brad: NODAR’s patented self-calibration system continuously calibrates stereo cameras in every frame, using image-based algorithms that detect and correct for sub-pixel misalignments caused by vibrations, thermal expansion, mechanical drift, or shocks. This real-time, software-only approach eliminates the need for external calibration targets or manual intervention, ensuring precise depth perception at long range, in low light, rain, or other challenging conditions. The result is highly robust, maintenance-free performance that remains stable across real-world automotive and industrial environments.
DVN: How does NODAR manage to overcome the mechanical limitations of stereo camera mounts in case of vibrations and heat deformations?
Brad: The range and precision of a stereo vision system are proportional to the distance between the cameras. By eliminating the rigid mount and placing the cameras wide apart, both range and precision are improved. NODAR overcomes the mechanical limitations of stereo camera mounts by allowing the cameras to be mounted independently and correcting for vibrations, shocks, and thermal deformation in software, on every frame. The NODAR online calibration system is fast enough to compensate in real time for the vibrations from a cobblestone road or the engine of a Class 8 truck, ensuring accurate depth measurements.
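NODAR has not published the internals of its patented method, but the general shape of per-frame online stereo calibration can be sketched with standard computer-vision building blocks. The following is a minimal illustration using OpenCV, not NODAR’s algorithm: match features between the two views on every frame, re-estimate the relative pose, and rectify before computing depth.

```python
# A minimal sketch of per-frame online stereo recalibration with OpenCV.
# This is NOT NODAR's patented algorithm; it only illustrates the idea of
# re-estimating camera extrinsics from the images themselves, every frame.

import cv2
import numpy as np

def recalibrate_and_rectify(left, right, K, dist, image_size):
    """Estimate the current relative pose from one stereo frame pair."""
    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Essential matrix -> relative rotation/translation, robust to outliers.
    E, mask = cv2.findEssentialMat(pts_l, pts_r, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_l, pts_r, K, mask=mask)

    # Note: t is recovered only up to scale; in practice the known
    # mechanical baseline length would supply the metric scale.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist,
                                                image_size, R, t)
    return R1, R2, P1, P2, Q
```

A production system would additionally filter this estimate over time and work at sub-pixel precision; the sketch simply shows that no calibration target or manual step is needed once the estimate comes from the scene itself.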
DVN: How does performance in bad weather compare to traditional camera-based systems?
Brad: In challenging conditions like rain, snow, and fog, NODAR’s stereo depth maps outperform traditional monocular vision systems. Poor visibility means a reduction in valid visual data on which to perform measurements. Because monocular systems require each camera image to be reduced significantly in size so the neural networks can run fast enough, valuable visual data is lost, making already degraded images worse. In other words, taking images already visually degraded by poor weather and then degrading them further to reduce resolution makes object classification by traditional monocular systems in inclement weather challenging. In contrast, NODAR uses each full-resolution image, with up to 8 megapixels per image, to estimate depth and detect objects. While degradation caused by environmental conditions exists, sufficient data remains in each image for NODAR’s algorithms to perform their functions accurately.
DVN: How does performance in bad weather compare to Lidar-based systems?
Brad: The performance of lidar systems degrades substantially in the presence of airborne particulates such as raindrops, snow, fog, and dust. One reason for this drop-off in performance is that the photons must pass through the cloud of particulates on the way to and from each object in the scene without being reflected, refracted, or absorbed by the particulates in the air.
NODAR is a passive sensor, meaning that it only receives photons from the scene the camera imagers are “looking” at. With the ability to receive 10x more photons than a lidar, and without being subject to the interference of outbound photons as lidar is, a NODAR system performs 2-3x better than a lidar in low-visibility conditions.