Ralf J. Muenster is Vice President of Business Development and Marketing at SiLC. With over 20 years in high-tech business growth and commercialization, his past roles include Director at Texas Instruments’ CTO office and leadership positions at National Semiconductor, Micrel, and AMD. He founded a successful startup and conducted photonics research at UC Berkeley, and holds a master’s degree in physics and several US patents.
DVN-Lidar: How did SiLC come to be?
Ralf Muenster: SiLC Technologies was officially founded in June 2018, but its roots extend back over three decades. Our team is a seasoned group of professionals with a history of successful startups and a proven ability to bring products from concept to market, including scaling up production. The core founding team, led by our CEO Mehdi Asghari and VP of R&D Jonathan Luff, began their journey in silicon photonics at Bookham, the pioneering silicon photonics company based in England, back in the 1990s. They played a pivotal role in leading Bookham to its multi-billion-dollar IPO in 2000. They subsequently joined the executive team at Kotura, a Los Angeles-based company, to further apply this groundbreaking technology; Kotura was acquired by Mellanox in 2013. This makes the SiLC team one of the most experienced groups globally in commercializing silicon photonics products. SiLC has also developed a strong intellectual property portfolio and a proprietary manufacturing process for high-performance optical components.
DVN-L: What industrial sectors are you targeting with your solutions?
R.M.: SiLC targets a diverse range of vertical market segments with its groundbreaking silicon photonics solutions, designed to equip machines with human-like vision capabilities. These sectors include:
Mobility and ADAS: SiLC’s technologies are crucial for the development of autonomous vehicles and ADAS. Our solutions provide these systems with the ability to perceive depth and motion much like the human eye, enhancing both autonomous navigation and driver assistance.
Robotics: SiLC is a significant player in the robotics revolution, particularly in automating warehousing and logistics. Our Eyeonic Vision Sensor enables robots to perceive their environment in a nuanced manner, akin to human vision. Labor shortages have led to a surge in demand for industrial robots, with over 3 million already in operation worldwide.
Smart Cameras: Our solutions can be integrated into smart camera systems to improve performance and add new functionalities. Equipped with SiLC technology, these cameras capture more than just images; they understand depth and motion in a scene.
Security: In the security sector, SiLC’s technologies offer more accurate and reliable monitoring and detection capabilities. Our sensors can differentiate between a wide range of objects and movements, closely resembling human perception.
DVN-L: Your photonics are designed for FMCW lidars. Could you explain more technical details of your chips?
R.M.: Our silicon photonics solutions, tailored for Frequency-Modulated Continuous Wave (FMCW) lidars, stand out in a crowded lidar market. While many companies venture into lidar technology, only a select few have the capability to develop FMCW, or coherent, lidar, often hailed as the ultimate solution. The primary challenge in realizing coherent lidar lies in the intricacy and cost of the required optical components. The solution to this challenge is integration. However, most existing integration platforms, designed for datacom applications, fall short of the stringent performance levels demanded by coherent lidar. Unlike datacom applications, which operate with digital signals and link budgets in the 20–30 dB range, lidar is inherently analog: it doesn't allow for noise-free regeneration, and amplification can degrade the signal-to-noise ratio (SNR). Lidar typically operates with an 80–100 dB link budget, making it significantly more challenging.
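To put those link-budget figures in perspective, decibels are a logarithmic power ratio, so the gap between datacom and lidar is far larger than the raw numbers suggest. A minimal sketch of the arithmetic (the specific budget values are taken from the figures quoted above):

```python
def db_to_linear(db: float) -> float:
    """Convert a power ratio in decibels to a linear factor."""
    return 10 ** (db / 10)

# Datacom link budget, upper end of the 20-30 dB range cited above
datacom_ratio = db_to_linear(30)    # 1,000x
# Coherent lidar link budget, upper end of the 80-100 dB range
lidar_ratio = db_to_linear(100)     # 10,000,000,000x

print(f"datacom: {datacom_ratio:,.0f}x")
print(f"lidar:   {lidar_ratio:,.0f}x")
print(f"lidar spans {lidar_ratio / datacom_ratio:,.0f}x more dynamic range")
```

In other words, a coherent lidar's optical chain must preserve signal integrity across a power ratio ten million times wider than a typical datacom link, which is why loss and noise budgets in the photonics platform matter so much.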
To address these challenges, SiLC employs a unique integration platform. Ours is effectively an analog silicon photonics process, optimized across ten or more parameters critical to FMCW performance. This yields performance metrics 10–100× superior to what our competitors can achieve. Essential attributes such as ultra-low losses, minimal noise, and high power-handling capacity are integral to our platform, ensuring the SNR and dynamic range necessary for practical long-range applications.
What truly sets us apart is our manufacturing approach. We produce our wafers in Japan using a proprietary process that remains inaccessible to our competitors. This unique process ensures that we can deliver the performance levels essential for FMCW lidar systems across various industries.
Moreover, our silicon photonics chips are highly integrated, featuring lasers, amplifiers, detectors, meters of waveguides, and even solid-state beam steering. This level of integration further enhances the performance and versatility of our solutions.
DVN-L: Could you tell us more about your most recent innovative products?
R.M.: At CES 2022, SiLC unveiled the Eyeonic Vision Sensor, the market's only chip-integrated FMCW lidar transceiver. Then at CES 2023, we introduced the Eyeonic Vision System, which was honored with a CES Innovation Award. This system is not just another lidar solution; it boasts the highest resolution, precision, and range in the industry. Uniquely, it is the only lidar system that provides polarization information, enhancing material detection and surface analysis capabilities. The Eyeonic Vision System is designed to deliver unparalleled visual perception, capable of identifying objects beyond a kilometer away. It is eye-safe, operates without interference from other users or ambient light, and its polarization intensity data further aids in material detection.
Currently, our portfolio includes four specialized versions of the Eyeonic Vision System, catering to different needs: short-range (up to 50 meters), medium-range (up to 150 meters), long-range (up to 300 meters), and ultra-long-range (beyond 1250 meters). Each variant is meticulously crafted to address specific application requirements, ensuring that we offer solutions aligned with the diverse needs of the industries we serve.

DVN-L: What do you see as the advantage of a 4D point cloud (the fourth dimension being relative speed) for object detection and tracking?
R.M.: The addition of a fourth dimension, relative speed (velocity), to the traditional 3D point cloud offers a multitude of advantages for object detection and tracking. A 4D point cloud enables a more robust and dynamic understanding of the environment, significantly enhancing the performance of systems in applications like autonomous vehicles, robotics, and security.
Firstly, the velocity data allows for immediate differentiation between static and moving objects, which is crucial for real-time decision-making. For example, in an autonomous driving scenario, the system can instantly determine whether an object in the vehicle’s path is stationary, like a traffic light, or moving, like a pedestrian or another vehicle. This helps in making more informed and quicker decisions, such as whether to slow down, stop, or maneuver around an object.
Secondly, the velocity information aids in predictive analytics. Knowing the speed and direction of an object can help the system anticipate future positions of that object, thereby allowing for more proactive and intelligent responses. This is particularly useful in high-speed or rapidly changing environments.
Lastly, the 4D point cloud simplifies object tracking over time. Traditional 3D point clouds may require complex algorithms to track object movement from one frame to the next. The inclusion of velocity data makes this process more straightforward, reducing computational load and improving system efficiency.
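The three advantages above can be sketched in a few lines of code. This is an illustrative toy model, not SiLC's actual data format: each point carries a position plus the directly measured radial speed, so classifying static versus moving objects and predicting range needs no frame-to-frame matching.

```python
from dataclasses import dataclass

@dataclass
class Point4D:
    """One return in a 4D point cloud: 3D position plus radial velocity.
    Field names are hypothetical, chosen for illustration."""
    x: float
    y: float
    z: float
    v: float  # measured relative (radial) speed in m/s; ~0 for static objects

def is_moving(p: Point4D, threshold: float = 0.2) -> bool:
    """Classify a point as moving if its measured speed exceeds a small
    noise threshold -- available per point, per frame."""
    return abs(p.v) > threshold

def predict_range(p: Point4D, dt: float) -> float:
    """Constant-velocity prediction of the point's range dt seconds ahead,
    using the directly measured radial speed."""
    r = (p.x**2 + p.y**2 + p.z**2) ** 0.5
    return r + p.v * dt

pedestrian = Point4D(x=30.0, y=4.0, z=0.0, v=-1.5)    # closing at 1.5 m/s
traffic_light = Point4D(x=60.0, y=0.0, z=5.0, v=0.0)  # stationary

print(is_moving(pedestrian))     # True: react to this object
print(is_moving(traffic_light))  # False: static infrastructure
```

With a 3D-only point cloud, both classifications would instead require matching points across at least two frames and differencing positions, which is exactly the computational load the velocity channel removes.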
In summary, the 4D point cloud not only enriches the data set but also enhances the system's ability to understand its environment, make real-time decisions, and predict future events, making it a game-changer in object detection and tracking.
DVN-L: What does the cost of your silicon photonics look like versus other solutions?
R.M.: SiLC’s silicon photonics technology significantly impacts the cost structure of lidar systems. Our chips integrate multiple functionalities—lasers, amplifiers, detectors, waveguides, and even solid-state beam steering—onto a single chip, reducing the number of discrete components and lowering overall system cost. The use of silicon wafer processes similar to CMOS imaging chips allows for high-volume manufacturing, further driving down per-unit costs. Additionally, the integrated nature of our chips simplifies assembly and calibration, reducing manufacturing expenses. Our unique, proprietary manufacturing process, carried out in Japan, ensures high performance while maintaining cost competitiveness, providing us with a distinct advantage over competitors.
In summary, SiLC’s silicon photonics technology offers a compelling cost advantage by integrating more functions onto a single chip, enabling high-volume manufacturing, and simplifying both assembly and long-term maintenance. This makes our lidar solutions not only technologically superior but also economically viable for widespread adoption.
DVN-L: Your company recently announced a partnership with indie Semiconductor. Will you tell us more about that?
R.M.: The partnership between indie Semiconductor and SiLC Technologies is a strategic alliance aimed at revolutionizing the lidar landscape. By integrating indie’s Surya SoC with SiLC’s Eyeonic Vision Sensor, we’ve created the industry’s most compact and high-performing coherent vision system. This collaboration offers a 10× advantage in performance, power, and cost, setting new benchmarks for lidar technology.
The Surya SoC brings software-defined, high-performance analog and digital processing to the table, while our Eyeonic Vision Sensor offers unparalleled integration, resolution, and precision. Together, we provide a complete 4D FMCW imaging solution that’s ready for mass-market deployment across various applications, including driver assistance, autonomous mobility, and industrial automation.
This partnership has already garnered attention from lead automotive and industrial customers, and we’re actively developing new reference platforms to showcase the scalability and flexibility of our combined technologies. It’s a collaboration that not only meets the current market demand but also anticipates future needs, ensuring that we stay ahead of the curve in next-generation sensing applications.
DVN-L: How do you foresee the deployment ramp-up of automotive lidars over the next few years?
R.M.: The current automotive lidar market is in its infancy, characterized by limited volumes and the predominant use of a first-generation lidar technology known as Time of Flight (ToF). This scenario is reminiscent of the early days of radar deployment in the automotive sector. As radar technology became more widespread in vehicles, interference issues arose among pulsed radars. This interference problem ended up costing automakers hundreds of millions of dollars, prompting a shift to FMCW technology. Just as chip integration played a pivotal role in significantly reducing the costs of radar systems, making them a common feature on today’s roads, we anticipate a similar trajectory for lidar. As the industry progresses, we expect chip-integrated solutions to drive down the costs of lidar systems, paving the way for broader adoption. Over the next few years, as the limitations of ToF become more apparent and the benefits of FMCW technology are realized, we foresee a significant ramp-up in the deployment of automotive lidars, eventually becoming as ubiquitous as radar systems are today.
DVN-L: What are the ambitions of SiLC for the next few years?
R.M.: SiLC is laser-focused on pioneering the future of silicon photonics and FMCW lidar solutions. Our mission is to empower machines with human-like vision capabilities, revolutionizing sectors from autonomous vehicles to industrial automation. We’re actively pushing for broader market adoption of our Eyeonic Vision Sensor and System, both of which deliver unmatched depth, velocity, and polarization data. A pivotal advancement we’re pursuing is the integration of solid-state beam steering, aiming to phase out mechanical scanners for enhanced reliability and performance.
We’re actively collaborating with industry leaders in each vertical market that we address, aiming to integrate our Eyeonic Vision Chip into their products. Our partnership with indie Semiconductor exemplifies this approach. As the lidar market matures, we’ll continue to innovate, setting new standards in performance and cost-efficiency.
With accolades from Frost & Sullivan and Gartner, we’re well-positioned for global expansion. Our ultimate goal is to make our Eyeonic Chip the default choice for coherent vision sensors, offering a compact, cost-effective, and energy-efficient solution.