December 5, 2016


Touchless vehicle apps know what you want, when you want it
By Sivakumar Yeddanapudi

Today’s cars and trucks are smart, but they’re not smartphones. Potentially, we can choose from millions of apps on our phones, just by touching an icon. If we tried to do all the things we want to do in our cars using apps, we’d be frustrated, because we can’t safely select them while driving. Automakers have been limited to the native applications built into cars – like Bluetooth and USB ports – leaving a passenger to use a phone to find the nearest gas station or restaurant if the vehicle didn’t have built-in navigation.

A new developer-friendly application platform from Visteon – called Phoenix – solves this problem and propels smart vehicle infotainment systems to the head of the class. This web-based infotainment platform “stitches” together apps native to the car with apps from third parties. Application integration is performed via “recipes” that enable the appropriate apps at the contextually right moment – without the driver needing to touch anything.

Case in point: as a driver enters the vehicle, a customized startup feature automatically presents content from three different apps: an audible message lists the day’s meetings, the weather forecast and traffic conditions for the anticipated route.

The driver does not need to use a phone, speak voice commands or input commands to a touch screen; all required information is automatically displayed on one screen.

Similarly, today two separate apps and two steps are required to open a garage door remotely and to show the vehicle’s position in relation to the garage entrance via GPS. Phoenix stitches these apps together so that the garage door automatically opens as the vehicle approaches.
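To make the “recipe” idea concrete, here is a minimal, purely illustrative sketch in Python of how a contextual rule could stitch a position app and a garage-door app together. It is not the Phoenix API (which is exposed through HTML5 and JavaScript), and the class and function names are hypothetical.

```python
import math

# Hypothetical stand-ins for two independent apps; not the Phoenix API.
class VehiclePositionApp:
    def distance_to_home(self, lat, lon, home_lat, home_lon):
        # Rough equirectangular distance in metres; adequate at neighbourhood scale.
        dx = math.radians(home_lon - lon) * math.cos(math.radians(lat)) * 6_371_000
        dy = math.radians(home_lat - lat) * 6_371_000
        return math.hypot(dx, dy)

class GarageDoorApp:
    def open(self):
        print("Garage door opening")

def garage_recipe(position_app, door_app, home, trigger_radius_m=50):
    """Return a position callback that triggers the door app near home."""
    opened = {"done": False}
    def on_position_update(lat, lon):
        if not opened["done"] and position_app.distance_to_home(lat, lon, *home) < trigger_radius_m:
            door_app.open()
            opened["done"] = True
    return on_position_update

# Wire the recipe to a stream of GPS fixes (coordinates are made up).
on_fix = garage_recipe(VehiclePositionApp(), GarageDoorApp(), home=(42.3000, -83.2300))
on_fix(42.3100, -83.2300)   # about 1.1 km away: nothing happens
on_fix(42.3001, -83.2301)   # within 50 m of home: the door opens
```

The point of the sketch is only that the driver never issues a command: a rule watches one app’s output and fires another app’s action when the context is right.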



Phoenix is easy to use since it complies with open standards from bodies such as W3C and GENIVI and is designed with app developers in mind. The platform lets developers build applications using HTML5 along with rich JavaScript-based application programming interfaces (APIs). This eliminates the need to rewrite applications when porting them to other infotainment systems.

Furthermore, Visteon offers a software development kit (SDK) with code libraries, documentation and a simulator. The Phoenix SDK makes development easier than conventional, often disjointed methods that require custom software or hardware and lack third-party tools – increasing cost and time. With Phoenix, the developer creates and tests the app with the SDK and simulator; the app is then validated by the automaker or Visteon and published to an app store. Phoenix is the first platform for vehicle apps to incorporate HTML5 and an SDK.

The Phoenix platform also advances the capability to update in-vehicle apps over the air, whether at a dealership lot or in the driveways of individual owners. For the first time, automakers can securely update just one portion of an app, using Visteon’s proprietary block-and-file technology, rather than needing to upgrade the entire system.

By 2020, when vehicle-to-vehicle (V2V) communication will be more common, vehicles will have the capability to display infotainment on screens from 12 to 17 inches in size, compared with today’s 7- to 8-inch screens. Phoenix will enable developers to create content that optimizes these larger screens, making them more useful for drivers and improving the driving experience.

Sivakumar Yeddanapudi is a platform leader managing the Phoenix program. He also develops infotainment platforms that incorporate the latest technologies, such as an advanced HTML5 HMI framework, web browser, cybersecurity, over-the-air reflash and vision processing for cockpit electronics.

He has more than 15 years of automotive experience, having served as a software developer and as a technical professional for audio and infotainment software before becoming a platform leader. He is based at Visteon’s headquarters in the U.S.


November 3, 2016


Machine Learning Algorithms in Autonomous Cars

Machine learning algorithms are now used extensively to find solutions to different challenges ranging from financial market predictions to self-driving cars. With the integration of sensor data processing in a centralized electronic control unit (ECU) in a car, it is imperative to increase the use of machine learning to perform new tasks. Potential applications include driving scenario classification or driver condition evaluation via data fusion from different internal and external sensors – such as cameras, radars, lidar or the Internet of Things.

Anshul Saxena, software expert at Visteon's technical center in Karlsruhe, Germany, provides a technical review of the use of machine learning algorithms in autonomous cars, and investigates the reusability of an algorithm for multiple features.

The applications running on a car's infotainment system can receive information from sensor data fusion systems and have, for example, the ability to direct the vehicle to a hospital if they sense that something is wrong with the driver. Such machine learning-based applications can also incorporate driver gesture and speech recognition, and language translation. The algorithms can be broadly classified as supervised or unsupervised; the difference between the two is how they learn.

Supervised algorithms learn using a training dataset, and keep on learning until they reach the desired level of confidence (minimization of probability error). They can be sub-classified into classification, regression and dimension reduction or anomaly detection.
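As a minimal sketch of the supervised case, the example below trains a classifier on a labelled dataset (scikit-learn’s bundled digits set, standing in for labelled sensor data) and checks held-out accuracy against a target confidence level.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Labelled training data: every sample comes with the correct answer.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(gamma=0.001)        # a supervised classifier
clf.fit(X_train, y_train)     # learn from the labelled examples

# Training and tuning continue until held-out accuracy reaches the desired
# confidence level, i.e. the probability of error is acceptably small.
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.3f}")
assert accuracy > 0.95
```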

Unsupervised algorithms try to make sense of the available data. That means an algorithm develops a relationship within the available data set to identify patterns, or divides the data set into subgroups based on the level of similarity between them. Unsupervised algorithms can be largely sub-classified into clustering and association rule learning.

There is now another set of machine learning algorithms called reinforcement algorithms, which fall somewhere between supervised and unsupervised learning. In supervised learning, there is a target label for each training example; in unsupervised learning, there are no labels at all; and reinforcement learning has sparse and time-delayed labels – the future rewards.


Based only on those rewards, the agent has to learn to behave in the environment. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithm's merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can potentially address, ranging from problems in artificial intelligence to operations research or control engineering – all relevant for developing a self-driving car. Reinforcement learning can be classified into direct learning and indirect learning.
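As a minimal sketch of the reinforcement setting – not tied to any particular driving stack – the tabular Q-learning agent below learns to reach the end of a toy one-dimensional track when the only feedback is a sparse, delayed reward at the goal.

```python
import random

N_STATES, GOAL = 6, 5            # positions 0..5 on a toy track; reward only at 5
ACTIONS = (-1, +1)               # move left or right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection over the two actions.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: q[s][i])
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0            # sparse, time-delayed reward
        # Q-learning update: propagate the delayed reward back through earlier states.
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
        s = s_next

# After training, the greedy policy steps right toward the goal from every state.
print([max((0, 1), key=lambda i: q[s][i]) for s in range(GOAL)])   # expected: [1, 1, 1, 1, 1]
```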

One of the main tasks of any machine learning algorithm in the self-driving car is continuous rendering of the surrounding environment and the prediction of possible changes to those surroundings. These tasks are mainly divided into four sub-tasks:
  • Object detection
  • Object identification or recognition
  • Object classification
  • Object localization and prediction of movement

Machine learning algorithms can be loosely divided into four categories: regression algorithms, pattern recognition, cluster algorithms and decision matrix algorithms. One category of machine learning algorithms can be used to execute two or more different sub-tasks. For example, regression algorithms can be used for object detection as well as for object localization or prediction of movement.


Regression Algorithms
This type of algorithm is good at predicting events. Regression analysis estimates the relationship between two or more variables and compares the effects of variables measured on different scales; it is mostly driven by three metrics, namely:
  • The number of independent variables
  • The type of dependent variables
  • The shape of the regression line.

In ADAS, images (radar or camera) play a very important role in localization and actuation, while the biggest challenge for any algorithm is to develop an image-based model for prediction and feature selection.

Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and the position of a given object in that image. The statistical model can be learned offline and provides fast online detection by allowing image sampling. Furthermore, it can be extended to other objects without requiring extensive human modeling. As an output to the online stage, the algorithm returns an object position and a confidence on the presence of the object.

These algorithms can also be used in a “long learning, short prediction” mode – learned at length offline and evaluated quickly online. The types of regression algorithms that can be used for self-driving cars include Bayesian regression, neural network regression and decision forest regression, among others.
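A hedged sketch of this idea, using one of the listed variants (decision forest regression via scikit-learn’s RandomForestRegressor): the relation between a synthetic one-row “image” and the position of a bright blob in it is learned offline, and the online stage returns a position estimate plus a confidence measure, taken here as the spread of the individual trees’ predictions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_frame(centre):
    """Synthetic 1 x 32 pixel 'image' with a bright blob centred at the given column."""
    cols = np.arange(32)
    return np.exp(-0.5 * ((cols - centre) / 2.0) ** 2) + 0.05 * rng.normal(size=32)

# Offline stage: learn the statistical relation between the image and the object position.
positions = rng.uniform(4, 28, size=500)
frames = np.stack([make_frame(c) for c in positions])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(frames, positions)

# Online stage: fast detection on a new frame.
true_position = 17.3
frame = make_frame(true_position).reshape(1, -1)
per_tree = np.array([tree.predict(frame)[0] for tree in model.estimators_])
print(f"estimated position: {per_tree.mean():.1f} px "
      f"(+/- {per_tree.std():.1f} px spread), true: {true_position} px")
```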

Pattern Recognition Algorithms (Classification)
In ADAS, the images obtained through sensors contain all types of environmental data; filtering of the images is required to recognize instances of an object category by ruling out the irrelevant data points. Pattern recognition algorithms are good at ruling out these irrelevant data points. Recognition of patterns in a data set is an important step before classifying the objects. These types of algorithms can also be defined as data reduction algorithms.

These algorithms help in reducing the data set by detecting object edges and fitting line segments (polylines) and circular arcs to the edges. Line segments are aligned to edges up to a corner, then a new line segment is started. Circular arcs are fit to sequences of line segments that approximate an arc. The image features (line segments and circular arcs) are combined in various ways to form the features that are used for recognizing an object.
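A sketch of that data-reduction step with OpenCV, assuming a camera frame stored in frame.png (a placeholder path); arc fitting is omitted for brevity.

```python
import cv2
import numpy as np

# Load a camera frame (placeholder path) and detect its edges.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Fit straight line segments (polylines) to the edge map.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=20, maxLineGap=5)

# A handful of segments replaces thousands of raw edge pixels: a large data reduction.
n_edge_pixels = int((edges > 0).sum())
n_segments = 0 if segments is None else len(segments)
print(f"{n_edge_pixels} edge pixels reduced to {n_segments} line segments")
```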

Support vector machines (SVM) with histograms of oriented gradients (HOG) and principal component analysis (PCA) are the most common recognition algorithms used in ADAS. The Bayes decision rule and k-nearest neighbors (KNN) are also used.
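A minimal sketch of the common HOG + PCA + SVM pipeline with scikit-image and scikit-learn, using synthetic placeholder frames in place of real camera data.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def synthetic_frame(has_object):
    """64 x 64 placeholder image; a bright vertical bar stands in for an object."""
    img = 0.1 * rng.random((64, 64))
    if has_object:
        col = int(rng.integers(20, 44))
        img[10:54, col:col + 6] = 1.0
    return img

labels = rng.integers(0, 2, size=200)
features = np.array([hog(synthetic_frame(bool(y)), orientations=9,
                         pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for y in labels])

# HOG descriptors -> PCA for dimensionality reduction -> SVM classifier.
classifier = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", gamma="scale"))
classifier.fit(features[:150], labels[:150])
print("held-out accuracy:", classifier.score(features[150:], labels[150:]))
```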

Clustering
Sometimes the images obtained by the system are not clear and it is difficult to detect and locate objects. It is also possible that the classification algorithms may miss an object and fail to classify and report it to the system. The reason could be low-resolution images, very few data points or discontinuous data. This type of algorithm is good at discovering structure from data points. Like regression, it describes a class of problem and a class of methods. Clustering methods are typically organized by modeling approaches such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality. The most commonly used algorithms are K-means and multi-class neural networks.
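A minimal clustering sketch with scikit-learn’s KMeans, grouping sparse, unlabelled 2-D detections (standing in for radar returns) into object candidates.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Sparse, unlabelled 2-D detections scattered around two real objects.
object_a = rng.normal(loc=[10.0, 2.0], scale=0.4, size=(15, 2))
object_b = rng.normal(loc=[25.0, -3.0], scale=0.4, size=(12, 2))
points = np.vstack([object_a, object_b])

# Group the points into object candidates without any labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
for centre, count in zip(kmeans.cluster_centers_, np.bincount(kmeans.labels_)):
    print(f"object candidate at x={centre[0]:.1f} m, y={centre[1]:.1f} m ({count} points)")
```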

Decision Matrix Algorithms
This type of algorithm is good at systematically identifying, analyzing and rating the performance of relationships between sets of values and information. These algorithms are mainly used for decision making. Whether a car needs to take a left turn or needs to brake depends on the level of confidence the algorithms have in the classification, recognition and prediction of the next movement of objects. These algorithms are ensembles of multiple decision models trained independently, whose predictions are combined in some way to make the overall prediction while reducing the possibility of errors in decision making. The most commonly used algorithms are gradient boosting machines (GBM) and AdaBoost.
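A hedged sketch of the ensemble idea with scikit-learn: two independently trained boosted classifiers vote on a brake/no-brake decision, with synthetic features standing in for the upstream confidence and motion-prediction outputs. The both-must-agree rule is just one illustrative way of combining the models.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

rng = np.random.default_rng(3)

# Placeholder features: [object confidence, predicted time-to-collision in seconds].
X = np.column_stack([rng.random(400), rng.uniform(0.5, 6.0, 400)])
y = ((X[:, 0] > 0.6) & (X[:, 1] < 2.0)).astype(int)      # 1 = brake, 0 = do not brake

# Two decision models trained independently on the same history.
models = [AdaBoostClassifier(random_state=0).fit(X[:300], y[:300]),
          GradientBoostingClassifier(random_state=0).fit(X[:300], y[:300])]

# Combine their predictions: here, brake only if both models agree.
sample = np.array([[0.85, 1.2]])          # high confidence, 1.2 s to collision
votes = [int(m.predict(sample)[0]) for m in models]
print("decision:", "brake" if all(votes) else "no brake", "| votes:", votes)
```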

As a software expert, Anshul is involved in the development of SmartCore™ and autonomous driving domain controller platforms. He is focused on self-driving car technologies and the effect of the Internet of Things on the auto industry. Anshul is based in Karlsruhe, Germany. 

September 8, 2016

In the Eye of the Beholder: A Biometric Approach to Automatic Luminance Control
By Paul Weindorf, Display Systems Technical Fellow, Visteon


Every driver these days depends on an array of displays on the instrument panel, in the center stack, in the mirror and even through head-up displays (HUDs). Since drivers rely on these displays for critical data, it’s vital that this information be clearly visible, whatever the light levels inside and outside the vehicle may be. However, variations in lighting – such as direct sunlight and passing or constant shadows – can sometimes make reading the displays challenging.

In some instances, the sun may be shining directly on the display, resulting in reflections that overwhelm the displayed information. In other cases, the driver may be looking out the front windshield into very bright sunlight and then attempt to glance at the instrument cluster without enough time for his or her eyes to adjust to the interior ambient light, again producing a temporary problem seeing the display information.

Automakers are becoming aware that constantly running displays at their highest luminance levels accelerates image burn-in on modern OLED screens and makes cooling the screens more difficult. Cranking up the brightness also draws a lot of power, impacting battery life, especially in electric vehicles.

Display developers are testing silicon light sensors – one pointing forward out the windshield, and others mounted at the corners of each display to detect the ambient light that falls on the screen. These detectors automatically adjust the luminance of the displays, making them brighter or dimmer as lighting conditions require, greatly extending the life of OLED screens and keeping them cooler.


Visteon’s dual OLED display features auto luminance – which adjusts display brightness depending on surrounding conditions


More recently, however, Visteon has proposed a different, more accurate method of automatic luminance control: measuring the constantly changing diameter of the driver’s pupils to determine the appropriate brightness levels of displays. This method, called a total biometric automatic luminance control system, replaces silicon sensors with an infrared eye-gaze camera that precisely determines pupil diameter.

When the driver is looking outside on a sunny day, his or her pupils will contract; when looking at the cockpit instruments, the pupils grow larger. Using the science of “pupillometry” – first applied in the fields of psychology and medicine – the camera detects where the driver is looking and determines the brightness outside and inside the vehicle. The display system automatically adjusts its luminance based directly on the driver’s eye response to light, rather than on input from sensors.
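One way to picture that mapping is through a classical pupillometry model. The sketch below inverts the Moon–Spencer (1944) relation, pupil diameter d = 4.9 − 3·tanh(0.4·log10 L), to estimate the adapting luminance L (in cd/m²) from a measured diameter; this particular model is an illustrative choice, not necessarily the one used in Visteon’s system.

```python
import math

def pupil_diameter_mm(luminance_cd_m2):
    """Moon-Spencer (1944): pupil diameter as a function of adapting luminance."""
    return 4.9 - 3.0 * math.tanh(0.4 * math.log10(luminance_cd_m2))

def luminance_from_pupil(diameter_mm):
    """Invert the model to estimate luminance from a measured pupil diameter.

    Valid for diameters roughly between 1.9 mm and 7.9 mm, where the
    inverse hyperbolic tangent is defined.
    """
    return 10 ** (math.atanh((4.9 - diameter_mm) / 3.0) / 0.4)

# Example: the eye-gaze camera measures a 2.5 mm pupil while the driver looks outside.
measured = 2.5
estimate = luminance_from_pupil(measured)
print(f"estimated adapting luminance: {estimate:.0f} cd/m^2")
print(f"round-trip check: {pupil_diameter_mm(estimate):.2f} mm")
```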

At this year’s national Society for Information Display (SID) conference, our Visteon team discussed how pupillometry can be used to determine luminance value when the driver is looking outside. At the SID Detroit chapter conference later this month, I will propose using pupil-diameter measurements to determine the reflected luminance from the front of the display. The latter is a more difficult issue because, when the driver glances from the road to the instrument cluster, the eye adapts to the dimmer light in an exponential fashion over a 10-second period, requiring an algorithm to determine what the final luminance value should be after the eyes have completed their adjustment.
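A hedged sketch of one way such an algorithm might work: fit an exponential adaptation curve to the first couple of seconds of pupil samples and extrapolate the steady-state diameter, so the display can be driven to its final luminance without waiting the full ten seconds. The data and time constant below are simulated for illustration and assume SciPy is available.

```python
import numpy as np
from scipy.optimize import curve_fit

def adaptation(t, d_start, d_final, tau):
    """Exponential approach from the initial to the fully adapted pupil diameter."""
    return d_final + (d_start - d_final) * np.exp(-t / tau)

# Simulated pupil samples (mm) over the first 2 s after the glance to the cluster,
# as they might come from a 30 Hz eye-gaze camera.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 60)
samples = adaptation(t, d_start=2.5, d_final=5.5, tau=3.0) + 0.05 * rng.normal(size=t.size)

# Fit the adaptation curve to the early samples and extrapolate the steady state,
# instead of waiting the full ~10 s for the eye to finish adjusting.
(d_start_fit, d_final_fit, tau_fit), _ = curve_fit(adaptation, t, samples, p0=[2.5, 5.0, 2.0])
print(f"predicted fully adapted pupil diameter: {d_final_fit:.2f} mm (tau ~ {tau_fit:.1f} s)")
# The predicted diameter can then be mapped to the final display luminance.
```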

The primary value of potentially using this biometric system instead of silicon detectors is its straightforward accuracy. When silicon sensors are employed, they are positioned at the corners of the display. Depending on the lighting and shadow conditions, they may not be correctly sensing the true reflected luminance. The biometric approach measures what the human eye actually is seeing off the front of the display. When examining what the driver is gazing at outside, silicon sensors look forward within a particular field of view, but the driver may be looking toward the left or right. The biometric eye-gaze camera uses the glint on the eyes from the infrared emitter to figure out which direction the eyes are looking and to adjust the display luminance based on what may be a greater or lesser intensity than the straightforward field of view.

Another advantage of biometrics is that it allows designers to remove sensors from the display and avoid the need for a forward-looking sensor, providing a sleeker and more pleasing appearance. Furthermore, eye-gaze cameras are now being placed in cars for other purposes, such as to detect drowsiness, and the same camera can also drive automatic luminance control, at no additional cost. An eye-gaze camera can be used to adjust the luminance of projected HUD displays automatically, as well.

The Visteon team’s next step is to build a physical model of a biometric luminance control system based on these principles and technologies. Ultimately, such technologies will allow displays to adjust to the absolute light levels in and around the vehicle, as well as to the driver’s perception of those levels throughout the journey. This concept promises another eye-popping advancement from Visteon for tomorrow’s cars and trucks.


Paul Weindorf is a display technical fellow for Visteon with more than 35 years of experience in the electronics industry. He currently supports display system activities for production, development and advanced projects. His interest lies in the display visibility arena and he participated in the SAE J1757 committee. Weindorf graduated from the University of Washington with a bachelor’s degree in electrical engineering.