A bicycle or pedestrian crossing the road, stopped vehicles, and debris are among the navigational challenges autonomous vehicles must be able to sense and respond to, day or night, in any weather condition. Software company Nodar is developing a camera-based solution it says provides better-than-lidar target identification, and at greater distances.
CEO and cofounder Leaf Jiang started working with lidar at MIT’s Lincoln Laboratory more than 13 years ago. Now he thinks a two-camera solution provides better ‘vision’ for autonomous driving and ADAS. COO and cofounder Brad Rosen expects a quarter of a billion L3 cars on the roads within the next decade: “At the heart of all of those vehicles is the perception system, and really at the core of that is always going to be vision systems. Cameras are going to be a part of this. So we’ve doubled down on cameras, and we think that cameras are the way to deliver self-driving cars into the future”.
Nodar’s software solution, the Hammerhead product line, provides any-object detection with long range and high-resolution 3D data. It offers vision at more than 1,000 meters, can detect objects as small as 10 cm at 150 meters, and can spot a larger object, such as an overturned motorcycle, at 350 meters.
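Some rough arithmetic puts those specs in perspective. With a pinhole camera model, the number of pixels an object spans is roughly the focal length (in pixels) times the object's size divided by its range. The sketch below is illustrative only, with an assumed 4,000-pixel focal length; Nodar has not published its camera parameters.

```python
# Back-of-envelope check: how many pixels does a 10 cm object
# span at 150 m? Pinhole model; focal length is an assumption.

def pixels_subtended(object_m: float, range_m: float, focal_px: float) -> float:
    """Approximate pixel extent of an object: f * (size / range)."""
    return focal_px * object_m / range_m

# Assumed 4000-px focal length (e.g. a telephoto-ish automotive camera).
px = pixels_subtended(object_m=0.10, range_m=150, focal_px=4000)
print(f"~{px:.1f} px")  # only a handful of pixels -- a demanding detection task
```

The point of the exercise: at these ranges a 10-cm object covers only a few pixels, which is why precise sub-pixel matching and calibration matter so much.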
Jiang says Nodar provides a combination of untethered stereo cameras, auto-calibration, and object detection with precise and reliable depth sensing and scene analysis—even at night and in low-visibility weather conditions.
One way of triangulating the distance to everything in a scene is to compare the left and right images. However, stereo cameras are very sensitive to relative alignment. Nodar provides software that keeps the cameras aligned no matter how far apart they are spaced.
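The underlying triangulation is the classic stereo relation: depth equals focal length times baseline divided by disparity (the pixel offset of a feature between the two images). A minimal sketch, with assumed rig parameters rather than anything Nodar has disclosed:

```python
# Classic rectified-stereo triangulation: Z = f * B / d.
# Illustrative sketch only; the focal length and baseline are assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a matched feature given its left-right pixel disparity."""
    return focal_px * baseline_m / disparity_px

# Assumed rig: 4000-px focal length, 1.2 m baseline.
z = depth_from_disparity(focal_px=4000, baseline_m=1.2, disparity_px=4.8)
print(f"{z:.0f} m")  # a disparity of under 5 px already corresponds to ~1 km
```

Because disparity shrinks toward zero at long range, even sub-pixel alignment errors between the cameras translate into large depth errors, which is the problem Nodar's auto-calibration is aimed at.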
Rosen points out that two-camera systems are already on the market, such as Subaru’s EyeSight driver-assist technology, but the cameras in that system are close together. Wider camera placement, he says, is more advantageous, but with it comes the challenge of keeping the cameras perfectly aligned. Nodar virtually aligns the cameras in software, which is enabled by Nvidia processors “and the incredible camera technology that has evolved and our patented algorithms”, says Rosen. “We untethered the cameras and we can mount them pretty much anywhere on the car”.
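The advantage of a wide baseline can be made concrete. To first order, stereo depth uncertainty grows with the square of range and inversely with focal length and baseline: dZ ≈ Z² · dd / (f · B), where dd is the disparity-matching error. The numbers below are assumptions chosen for illustration, not measured Nodar figures:

```python
# First-order stereo depth uncertainty: dZ = Z^2 * dd / (f * B).
# All parameter values here are assumptions for illustration.

def depth_error(z_m: float, focal_px: float, baseline_m: float,
                disparity_err_px: float = 0.25) -> float:
    """Approximate depth error at range z_m for a given matching error."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Narrow windshield-style rig vs. a wide roofline-style rig (assumed sizes).
for baseline in (0.35, 1.4):
    err = depth_error(z_m=150, focal_px=4000, baseline_m=baseline)
    print(f"baseline {baseline} m -> ~{err:.1f} m depth error at 150 m")
```

Quadrupling the baseline cuts the depth error at a given range by a factor of four, which is why untethering the cameras from a single rigid housing is attractive, provided the alignment problem is solved in software.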
Nodar does not provide the cameras; its software is compatible with off-the-shelf cameras. The recent launch of GridDetect completes the package and makes the point-cloud data more presentable to the end user. GridDetect uses signal processing and algorithms instead of artificial intelligence; Jiang says an AI approach requires training a system, while algorithms do not have to be trained. That means, he says, the system can better recognize unusual objects, whereas an AI system detects only the specific objects it has been trained on.
Rosen says GridDetect provides ultra-precise, long-range sensing able to detect a 10-cm object at 150 meters, a 12-cm item at 172 meters, and a tire at 250 meters.
Nodar rented an airport for testing to ensure the ‘roadway’ was flat, and tested whether the system could pick up 10 different objects at varying distances. The smallest, a 12-cm target, was recognized by GridDetect at 172 meters; larger objects, like traffic cones and cars, were picked up at 500 meters.
DVN comments
Stereo vision offers three-dimensional perception, high precision, robustness, low cost, flexibility, and real-time processing capability. These advantages make it a competitive technique for environmental perception in many applications, including autonomous vehicles and robotics. In the past, the deployment of such solutions for L2 AD systems was hindered by the limited stability of camera alignment over the life of the vehicle. Camera resolution enhancements and new auto-alignment algorithms seem to have solved this problem. What of bad-weather situations, though? Time will tell!