June 28, 2017



Current sensor data fusion architectures: Visteon’s approach

Sensor fusion is a critical requirement in creating an autonomous vehicle’s “brain,” ensuring it can make intelligent, accurate and timely decisions based on the behavior of other traffic participants. Sensor fusion takes the inputs of different sensors and sensor types and uses the combined information to perceive the environment more accurately, resulting in better and safer decisions than any independent system could achieve.

An increasingly complex range of on-board smart sensors like cameras, radar, ultrasonic, infrared, LiDAR and connected sensors for V2X communications, Wi-Fi, 5G, GPS and telematics provides both real-time and rich data that the connected car must be able to process to create a picture of the environment around the car at any moment in time.

To achieve fully autonomous driving – SAE Level 4/5 – it is essential to make judicious use of this sensor data, which is only possible with multi-sensor data fusion. By fusing sensor data, the vehicle forms a more accurate and reliable view of its environment and gains intelligent situational awareness. Sensor data fusion acts much like a human brain, providing the car with a complete and accurate picture of its surroundings.

Multi-sensor data fusion is a broad and complex field. Data processing techniques that associate, aggregate and integrate data from different sources allow the system to build knowledge about events and environments that no individual sensor could capture on its own.

There are many ways to fuse sensor data, and multi-sensor data fusion can be classified into two categories: homogeneous and heterogeneous. In homogeneous sensor fusion, data from sensors of the same type are fused together; in heterogeneous sensor fusion, data from different sensor types are fused based on time of arrival, i.e., the data streams are synchronized using their time stamps.
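The time-stamp synchronization step for heterogeneous fusion can be sketched in a few lines. This is a minimal illustration, not Visteon's implementation: the function name, the 50 ms tolerance and the nearest-neighbour pairing strategy are all assumptions for the example.

```python
from bisect import bisect_left
from typing import List, Tuple

def sync_by_timestamp(a: List[Tuple[float, object]],
                      b: List[Tuple[float, object]],
                      tolerance: float = 0.05) -> List[Tuple[object, object]]:
    """Pair each measurement in `a` with the nearest-in-time measurement
    in `b`, provided the two time stamps differ by at most `tolerance`
    seconds. Both lists are (timestamp, measurement) tuples sorted by time."""
    b_times = [t for t, _ in b]
    pairs = []
    for t_a, m_a in a:
        i = bisect_left(b_times, t_a)
        # Only the entries just before and just after t_a can be nearest.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(b):
                dt = abs(b_times[j] - t_a)
                if dt <= tolerance and (best is None or dt < best[0]):
                    best = (dt, b[j][1])
        if best is not None:
            pairs.append((m_a, best[1]))
    return pairs

# Hypothetical camera frames and radar scans with slightly offset clocks:
camera = [(0.00, "cam_frame_0"), (0.10, "cam_frame_1")]
radar = [(0.02, "radar_scan_0"), (0.13, "radar_scan_1")]
print(sync_by_timestamp(camera, radar))
# [('cam_frame_0', 'radar_scan_0'), ('cam_frame_1', 'radar_scan_1')]
```

A production system would additionally compensate for each sensor's known latency before comparing time stamps, but the pairing logic is the same.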

Multi-sensor data fusion can be performed at four different processing levels, according to the stage at which the fusion takes place: signal, object, feature, or decision level.



Currently, automotive players – automakers and Tier 1 suppliers alike – rely extensively on feature-level and decision-level fusion. The benefit of this approach is a simpler fusion system, as the smart sensors themselves provide the list of features the system needs to make decisions. Almost all Level 2 autonomous systems use this approach today.
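Decision-level fusion of smart-sensor outputs can be illustrated with a weighted vote over per-sensor classifications. This is a simplified sketch, not an actual production algorithm: the sensor names, reliability weights and confidence values below are invented for the example.

```python
from collections import defaultdict

def fuse_decisions(detections, weights):
    """Decision-level fusion: each smart sensor reports a (label, confidence)
    classification for the same tracked object; votes are combined after
    weighting by an assumed per-sensor reliability factor."""
    scores = defaultdict(float)
    for sensor, (label, conf) in detections.items():
        scores[label] += weights.get(sensor, 1.0) * conf
    total = sum(scores.values())
    # Normalize so the fused scores form a pseudo-probability distribution.
    return {label: s / total for label, s in scores.items()}

# Hypothetical per-sensor decisions about one object ahead of the vehicle:
detections = {"camera": ("pedestrian", 0.9),
              "radar": ("pedestrian", 0.6),
              "lidar": ("cyclist", 0.5)}
weights = {"camera": 1.0, "radar": 0.8, "lidar": 0.9}
fused = fuse_decisions(detections, weights)
print(max(fused, key=fused.get))  # pedestrian
```

Note what is lost here: by the time the vote happens, each sensor has already discarded its raw data, which is exactly the limitation the article discusses next.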

But feature-level fusion is not sufficient to achieve a Level 4/5 autonomous system, because of its limited view of the environment and the loss of contextual information. Applying artificial intelligence (AI) to individual sensor data to obtain a feature list inherits the same limitation: the context each sensor discards during feature extraction cannot be recovered later in the pipeline.

Visteon is currently working on a signal-level multi-sensor data fusion approach. In this centralized architecture, sensor data from different sources are fused at the raw signal level using AI techniques. The major advantage of this approach is that complete environmental information is available to the system which, with the help of machine learning techniques, can make informed decisions right from the start of a journey and subsequently direct the car safely.
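The core idea of signal-level (early) fusion is to combine raw measurements into one shared representation before any per-sensor feature extraction. A minimal sketch, assuming hypothetical 2-D point returns and a simple occupancy grid as the shared representation (the grid would then feed a learned perception model):

```python
def raw_points_to_grid(point_sets, cell_size=0.5, size=20):
    """Signal-level fusion sketch: project raw (x, y) returns from several
    sensors into one shared occupancy grid. Evidence from all sensors
    accumulates in the same cells, so no per-sensor information is
    discarded before fusion. All parameters here are illustrative."""
    half = size // 2
    grid = [[0] * size for _ in range(size)]
    for points in point_sets:            # one point set per sensor
        for x, y in points:              # raw returns in metres
            i = int(x / cell_size) + half
            j = int(y / cell_size) + half
            if 0 <= i < size and 0 <= j < size:
                grid[i][j] += 1          # evidence accumulates across sensors
    return grid

# Hypothetical raw returns from lidar and radar observing the same object:
lidar = [(1.0, 0.2), (1.1, 0.2)]
radar = [(1.0, 0.3)]
grid = raw_points_to_grid([lidar, radar])
```

A real system fuses far richer signals (camera pixels, radar spectra, full lidar point clouds) and uses learned rather than hand-built representations, but the principle is the same: fuse first, extract features afterwards.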

The safe and successful introduction of autonomous driving depends on a robust solution to sensor data fusion – creating a “brain” for the car’s automated body that is able to process huge data sets, apply AI and rapidly transmit information to and from its environment. Visteon’s approach is based on integrated and centralized multi-sensor fusion technology that is built on fault-tolerant hardware and incorporates AI for the most accurate levels of object detection and classification. 

As a lead software engineer, Anshul is involved in the development of SmartCore™ and autonomous driving domain controller platforms. He is focused on self-driving car technologies and the effect of the Internet of Things on the auto industry. Anshul is based in Karlsruhe, Germany.