1. DVN: Can you tell me a bit about your role at TomTom and your experience in the mapping industry?
TT: I lead the ADAS & Automated Driving market segment at TomTom, where I am responsible for developing and executing the go-to-market strategy for our portfolio of driver assistance and automated driving capabilities across all levels of automation. My team and I work very closely with automakers and Tier-1 suppliers to understand what they need from maps and location technologies and then translate that into concrete products and roadmaps.
2. DVN: TomTom has been making maps for over 30 years. What are some of the latest capabilities that TomTom offers auto OEMs in regard to navigation systems?

TT: TomTom has long been a pioneer in digital maps for navigation in the automotive industry. In the early days, our maps were essentially designed to do one thing: guide drivers to their destination. What has fundamentally changed is that today our maps are also built for machines, supporting ADAS and automated driving software.
The most exciting trend we see is that, in the next generation of vehicles where automation is central to the driving experience, the traditional separation between the navigation UI and the ADAS UI is disappearing. Instead of two parallel systems, drivers will experience one coherent, seamless interface.
TomTom’s navigation stack is evolving to enable exactly that. Our next-generation solution, called Surround Navigation, is built around two key pillars:
- Lane-level navigation on all roads, including complex urban environments.
- Fusion of sensor perception and map intelligence, enabling OEMs to design UIs where what the sensors see and what the maps know are presented together in a unified, intuitive experience.
3. DVN: How have new technologies such as AI and satellite imaging changed the way that maps are made?
TT: New technologies like AI have completely changed how maps are made, from the way data is collected to how often maps can be updated. In the past, mapping relied heavily on people going into the field, taking measurements, and then digitizing maps by hand. Now, AI can automate a large part of this process by analyzing huge amounts of data from a variety of sources, including satellites and perception-derived observations, to recognize road elements down to individual lane markings. This makes it much faster and cheaper to create detailed, up-to-date maps.
This is at the core of why TomTom last year launched Orbis, a radically new “map factory”, designed around an AI-driven, multi-source approach that allows us to create “live” maps that reflect what’s happening on the road right now.
4. DVN: Is lidar still used in mapping?
TT: Yes, lidar is still used in mapping, but the way we use it has evolved. In general, lidar sensors are great at capturing very high-accuracy 3D data, which traditionally is needed to generate HD maps.
Survey vehicles equipped with lidar are still part of TomTom’s toolbox. What has changed is how we use them. In the past, we used lidar-equipped survey vehicles to capture virtually every kilometer of HD map coverage. That made HD maps expensive to produce and update, and difficult to scale across the entire road network.
Today we’ve moved away from the traditional HD map approach: with Orbis Maps, we’ve pivoted to a multi-source model. We ingest data from many different inputs and only deploy survey vehicles where they really add value: in places where we don’t have sufficient information from other sources, or where a customer needs very high accuracy on a specific portion of the road network. This multi-source model helps to make the mapping process more scalable and affordable, while still keeping maps up to date and allowing us to extend high-quality coverage across all types of roads, from highways to urban streets and residential areas.
5. DVN: Are HD Maps still specified in new RFQs or is this technology being replaced by something else? How are L2++ driving systems changing mapping requirements?
TT: What we see happening now is the deployment of a new generation of automated driving systems, especially for L2+ and L3 applications, that are much more AI-driven and run on significantly more computing power in the vehicle. That unlocks new capabilities and allows the system to take over more of the driving task, still under human supervision. You’ll often hear these described as “mapless” systems. In practice, that usually means they can achieve a certain level of performance without relying on traditional HD maps as a hard requirement. Most of these systems still use maps, just not in the same way, and not the kind of HD map used before.
In the previous generation of automated driving software, HD maps were an essential enabler: they were used to help sensors recognize road elements and to localize the vehicle very precisely in the lane. In today’s L2+ systems, AI has become very good at understanding what’s happening around the vehicle from the sensor data itself. Maps are shifting into a different role: they provide rich context that complements what the sensors see, helping with complex road layouts, anticipating what’s coming ahead and acting as a source of legal rules such as speed limits or access restrictions.
Because of this, demand is gradually shifting away from traditional, high-accuracy HD maps towards maps that are fresher, more affordable and offer broader coverage, down to urban roads. Maps like TomTom’s Lane Model still offer far more detail than a standard navigation map, but they deliberately trade some of that very high accuracy in favor of freshness, scalability and affordability. That’s what many of the new RFQs are asking for: maps with the right level of detail and context to match the capabilities of modern L2+ driving systems.
6. DVN: Do robotaxis and trucks have different map system requirements?
TT: They do, and the differences start with the level of automation at which they operate.
When we talk about robotaxis and autonomous trucks, we’re usually talking about Level 4 automation systems. These are designed to operate without direct human supervision within a defined operational domain. Because there isn’t a human constantly “in the loop,” L4 systems tend to build in more redundancy and safety nets at every layer of the stack, including their map systems. That already sets their requirements apart from L2+ and L3 passenger cars, where the driver is still expected to oversee the system and take over when needed.
Robotaxis are typically deployed in dense urban areas within a geofenced zone. Almost all robotaxi operators still rely on highly detailed, high-accuracy HD maps at street level. Those maps need to be refreshed as frequently as possible, because urban roads change often. Many robotaxi companies also leverage the advantage of operating a fleet of advanced sensor-equipped vehicles, so they can continuously collect fresh map data within the areas where they run their service.
Autonomous trucks, on the other hand, focus on long-distance freight: highways, major arterials and key logistics hubs. They also still depend on HD maps, but with a more corridor-focused coverage rather than dense city grids.
7. DVN: Real-time traffic information and analytics are key to the driving experience. How is this changing in the autonomous world?
TT: Real-time traffic information is changing from something that advises the driver about live road conditions to something that actively drives decisions on behalf of the autonomous driving software. It provides context that goes beyond what the sensors can see.
In today’s world, real-time traffic is mainly about the driver experience: avoiding congestion, finding a faster route and getting a reliable ETA. In an autonomous world, that same information becomes an input to the driving brain itself. Traffic data and analytics feed into perception, prediction and planning: the system uses them to decide which lane to stay in, when to merge, how much distance to keep, and how to adapt its behavior to congestion, incidents or roadworks ahead. The system no longer just warns the human driver; it acts on the information itself.
This is also why the nature of the data is changing. Autonomous driving systems benefit from traffic information that is more granular and more reliable, which is why at TomTom we’re working to bring traffic information to lane level, and to improve confidence levels for traffic accidents and road hazards.
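To make that shift concrete, here is a toy sketch, under assumed data, of lane-level traffic feeding a planning decision rather than only a driver warning; the lane speeds, the closure input and the decision rule are all hypothetical and are not drawn from TomTom’s products.

```python
# Toy example only: lane-level traffic data feeding a lane-choice decision.
# Lane speeds, closures and the decision rule are hypothetical, not a real system.
def pick_target_lane(lane_speeds_kph, closed_lanes=frozenset()):
    """Prefer the fastest-flowing lane that is not reported closed ahead
    (e.g. because of an incident or roadworks in the traffic feed)."""
    candidates = {lane: speed for lane, speed in lane_speeds_kph.items()
                  if lane not in closed_lanes}
    return max(candidates, key=candidates.get)

# Three-lane motorway segment: lane 2 is blocked ahead, lane 1 flows fastest.
lane_speeds = {0: 95.0, 1: 110.0, 2: 30.0}
print(pick_target_lane(lane_speeds, closed_lanes={2}))  # prints 1
```

The point is simply that the same traffic feed that once produced a warning can now select an action.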
8. DVN: What sort of sensor data is collected from autonomous vehicles and how is that used?
TT: Modern cars generate a vast amount of sensor data, and the next generation of automated driving vehicles will generate even more. What we call “Sensor-derived Observations” are a key input to our new Orbis map factory.
These may include camera-derived observations (which help identify lane markings, traffic signs and lights, road edges, crosswalks, barriers, etc.), position and motion data (like high-precision GPS/GNSS probe data to know exactly where the vehicle is and how it’s moving) and vehicle signals (such as speed, steering angle, lane change indicator use), which help us understand how the vehicle is actually driving through the road layout.
Those observations are of course aggregated and anonymized across many vehicles. In our map-making systems, we fuse and compare these observations with all other sources and with the existing map. If, for example, multiple sources report a changed road shape, that becomes a strong signal that the map needs to be updated.
We look at our sources and use AI and statistical methods to understand what has really changed, filter out noise, and then automatically update or enrich the map with new road elements and attributes. With this new multi-source approach, the map becomes a living product that is constantly refined by what we see on the road every day.
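As a minimal sketch of that principle, assuming a simplified observation format, the following toy example flags a map attribute for update only when several independent sources agree on a value that differs from the current map; the data structures, source names and threshold are hypothetical and do not represent TomTom’s actual map-making pipeline.

```python
# Illustrative sketch only: a toy version of multi-source map change detection.
# Data structures, sources and threshold are hypothetical assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Observation:
    source: str        # e.g. "camera", "gnss_probe", "survey_vehicle"
    road_segment: str  # identifier of the observed road segment
    attribute: str     # e.g. "lane_count", "speed_limit"
    value: object      # observed value for that attribute

def detect_map_changes(observations, current_map, min_sources=2):
    """Flag attributes where several independent sources agree on a value
    that differs from the current map."""
    # Count how many distinct sources report each (segment, attribute, value).
    votes = defaultdict(set)
    for obs in observations:
        votes[(obs.road_segment, obs.attribute, obs.value)].add(obs.source)

    changes = []
    for (segment, attribute, value), sources in votes.items():
        mapped = current_map.get((segment, attribute))
        # Enough independent agreement, and it disagrees with the map.
        if len(sources) >= min_sources and value != mapped:
            changes.append({"segment": segment, "attribute": attribute,
                            "old": mapped, "new": value,
                            "sources": sorted(sources)})
    return changes

# Example: two independent sources report a new speed limit on one segment.
current_map = {("A12#0042", "speed_limit"): 120}
observations = [
    Observation("camera", "A12#0042", "speed_limit", 100),
    Observation("gnss_probe", "A12#0042", "speed_limit", 100),
]
print(detect_map_changes(observations, current_map))
```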
9. DVN: Agentic AI is coming to cars. What is TomTom doing in that space?
TT: Agentic AI is absolutely coming to cars, and at TomTom we’re working on our TomTom AI Agent, or TAIA. You can think of TAIA as an in-car navigation expert that you can just talk to naturally.
Instead of tapping through menus and settings, you describe what you need, for example “I’m driving to Berlin tomorrow, I’d like to avoid tolls and I need two fast-charging stops”, and TAIA can understand that, reason about it, and turn it into a smart plan.
What really matters is that TAIA goes beyond simply reacting to commands. It brings proactive route intelligence into the car: continuously monitoring traffic, incidents and hazards, and suggesting better alternatives when conditions change.
It can plan complex multi-stop trips and EV journeys in one fluid interaction, combining routing, charging and driver context, without the driver having to micromanage every step.
Crucially, this isn’t a future concept. TAIA is a product that’s available now. It’s designed to work seamlessly alongside any leading voice assistant, so OEMs don’t need to build and maintain a heavy, bespoke AI stack just for navigation.
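As a purely illustrative sketch, and not TAIA’s actual interface or data model, the kind of structured trip request an agent might derive from the Berlin example above could look roughly like this; every field name here is an assumption made for illustration.

```python
# Hypothetical sketch: a structured trip request an in-car agent might derive
# from "I'm driving to Berlin tomorrow, I'd like to avoid tolls and I need
# two fast-charging stops". Field names and values are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TripRequest:
    destination: str
    departure: str                                   # when the trip starts
    avoid: list[str] = field(default_factory=list)   # route features to avoid
    charging_stops: int = 0                          # requested fast-charging stops
    connector_type: Optional[str] = None             # EV connector preference, if known

request = TripRequest(
    destination="Berlin",
    departure="tomorrow 08:00",
    avoid=["toll_roads"],
    charging_stops=2,
)

# A routing backend would turn this into an actual route and keep re-planning
# as traffic, incidents or charger availability change along the way.
print(request)
```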
10. DVN: What other new developments in mapping can we expect to see in the next few years?
TT: We’ve already gone through one big shift in mapping, and we’re right in the middle of the next one.
TomTom was one of the pioneers of HD maps for automated driving. Since then, we’ve seen the industry move towards crowdsourced, multi-source and AI-driven map production, a direction we also helped lead with our Orbis “map factory.” That shift is all about making maps fresher, more affordable and more scalable, especially in complex urban environments where change is constant.
Looking ahead, we see the next big development happening inside the car. AI systems on board will keep getting better at understanding the real world around the vehicle, not just recognizing objects, but also learning the dynamics, physics and spatial relationships that govern how traffic actually behaves. In other words, they’ll build increasingly reliable “world models” of their surroundings.
The exciting part is how those world models will blend with map intelligence. Maps will continue to provide a layer of knowledge complementary to perception, while the vehicle’s AI contributes real-time understanding of what’s happening right now. Together, they’ll create a richer, more reliable representation of reality around the car than either could alone.
As this happens, the role of maps will keep evolving. Qualities like freshness, reliability, modularity and the ability to plug into whatever perception or driving stack a manufacturer chooses will become even more important. Maps won’t disappear; they’ll become a more flexible, integrated part of the intelligence that makes automated driving possible.